00:00:00.001 Started by upstream project "autotest-per-patch" build number 132840
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.071 The recommended git tool is: git
00:00:00.071 using credential 00000000-0000-0000-0000-000000000002
00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.135 Fetching changes from the remote Git repository
00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.189 Using shallow fetch with depth 1
00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.189 > git --version # timeout=10
00:00:00.243 > git --version # 'git version 2.39.2'
00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.483 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.498 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.508 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.508 > git config core.sparsecheckout # timeout=10
00:00:05.521 > git read-tree -mu HEAD # timeout=10
00:00:05.540 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.559 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.560 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.693 [Pipeline] Start of Pipeline
00:00:05.705 [Pipeline] library
00:00:05.706 Loading library shm_lib@master
00:00:05.706 Library shm_lib@master is cached. Copying from home.
00:00:05.718 [Pipeline] node
00:00:05.738 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.740 [Pipeline] {
00:00:05.748 [Pipeline] catchError
00:00:05.749 [Pipeline] {
00:00:05.758 [Pipeline] wrap
00:00:05.765 [Pipeline] {
00:00:05.772 [Pipeline] stage
00:00:05.773 [Pipeline] { (Prologue)
00:00:05.789 [Pipeline] echo
00:00:05.790 Node: VM-host-WFP1
00:00:05.796 [Pipeline] cleanWs
00:00:05.805 [WS-CLEANUP] Deleting project workspace...
00:00:05.805 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.812 [WS-CLEANUP] done
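
Note: the jbp config-repo checkout above is a shallow, single-ref fetch into a detached HEAD rather than a full clone, which is what keeps it under six seconds. A minimal sketch of the equivalent manual steps (proxy and credential setup omitted; URL and ref taken from the log):

    # Fetch only the tip commit of refs/heads/master, then check it out detached.
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f FETCH_HEAD
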
00:00:06.041 [Pipeline] setCustomBuildProperty
00:00:06.117 [Pipeline] httpRequest
00:00:07.917 [Pipeline] echo
00:00:07.918 Sorcerer 10.211.164.101 is alive
00:00:07.926 [Pipeline] retry
00:00:07.927 [Pipeline] {
00:00:07.938 [Pipeline] httpRequest
00:00:07.943 HttpMethod: GET
00:00:07.943 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.944 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.951 Response Code: HTTP/1.1 200 OK
00:00:07.951 Success: Status code 200 is in the accepted range: 200,404
00:00:07.951 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.633 [Pipeline] }
00:00:09.647 [Pipeline] // retry
00:00:09.656 [Pipeline] sh
00:00:09.947 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.960 [Pipeline] httpRequest
00:00:12.976 [Pipeline] echo
00:00:12.977 Sorcerer 10.211.164.101 is dead
00:00:12.983 [Pipeline] httpRequest
00:00:13.524 [Pipeline] echo
00:00:13.525 Sorcerer 10.211.164.20 is alive
00:00:13.532 [Pipeline] retry
00:00:13.534 [Pipeline] {
00:00:13.547 [Pipeline] httpRequest
00:00:13.552 HttpMethod: GET
00:00:13.553 URL: http://10.211.164.20/packages/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz
00:00:13.555 Sending request to url: http://10.211.164.20/packages/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz
00:00:13.556 Response Code: HTTP/1.1 404 Not Found
00:00:13.556 Success: Status code 404 is in the accepted range: 200,404
00:00:13.557 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz
00:00:13.560 [Pipeline] }
00:00:13.576 [Pipeline] // retry
00:00:13.582 [Pipeline] sh
00:00:13.862 + rm -f spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz
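
Note: the 404 above is not a failure; the request declares 200,404 as its accepted range, so a package-cache miss is tolerated. The pipeline then deletes the useless saved body and falls back to a fresh clone from Gerrit. A hypothetical shell equivalent of that probe-then-fallback pattern:

    # Probe the package cache; on a miss, discard the saved body and clone instead.
    code=$(curl -s -o spdk.tar.gz -w '%{http_code}' \
        http://10.211.164.20/packages/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz)
    if [[ $code != 200 ]]; then
        rm -f spdk.tar.gz
        git clone https://review.spdk.io/gerrit/a/spdk/spdk
    fi
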
00:00:13.876 [Pipeline] retry
00:00:13.879 [Pipeline] {
00:00:13.899 [Pipeline] checkout
00:00:13.907 The recommended git tool is: NONE
00:00:13.920 using credential 00000000-0000-0000-0000-000000000002
00:00:13.922 Wiping out workspace first.
00:00:13.931 Cloning the remote Git repository
00:00:13.934 Honoring refspec on initial clone
00:00:13.936 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:13.937 > git init /var/jenkins/workspace/nvme-vg-autotest/spdk # timeout=10
00:00:13.947 Using reference repository: /var/ci_repos/spdk_multi
00:00:13.947 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:13.947 > git --version # timeout=10
00:00:13.952 > git --version # 'git version 2.25.1'
00:00:13.952 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:13.957 Setting http proxy: proxy-dmz.intel.com:911
00:00:13.957 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/24/25524/6 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:46.613 Avoid second fetch
00:00:46.629 Checking out Revision 2104eacf0c136776cfdaa3ea9c187a7522b3ede0 (FETCH_HEAD)
00:00:46.856 Commit message: "test/check_so_deps: use VERSION to look for prior tags"
00:00:46.587 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:00:46.593 > git config --add remote.origin.fetch refs/changes/24/25524/6 # timeout=10
00:00:46.599 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:46.614 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:46.624 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:46.630 > git config core.sparsecheckout # timeout=10
00:00:46.633 > git checkout -f 2104eacf0c136776cfdaa3ea9c187a7522b3ede0 # timeout=10
00:00:46.857 > git rev-list --no-walk cec5ba284b55d19c90359936d77b707e398829f7 # timeout=10
00:00:46.876 > git remote # timeout=10
00:00:46.880 > git submodule init # timeout=10
00:00:46.967 > git submodule sync # timeout=10
00:00:47.043 > git config --get remote.origin.url # timeout=10
00:00:47.052 > git submodule init # timeout=10
00:00:47.122 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:00:47.125 > git config --get submodule.dpdk.url # timeout=10
00:00:47.131 > git remote # timeout=10
00:00:47.135 > git config --get remote.origin.url # timeout=10
00:00:47.138 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:00:47.142 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:00:47.147 > git remote # timeout=10
00:00:47.149 > git config --get remote.origin.url # timeout=10
00:00:47.154 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:00:47.157 > git config --get submodule.isa-l.url # timeout=10
00:00:47.162 > git remote # timeout=10
00:00:47.168 > git config --get remote.origin.url # timeout=10
00:00:47.173 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:00:47.177 > git config --get submodule.ocf.url # timeout=10
00:00:47.183 > git remote # timeout=10
00:00:47.189 > git config --get remote.origin.url # timeout=10
00:00:47.195 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:00:47.199 > git config --get submodule.libvfio-user.url # timeout=10
00:00:47.204 > git remote # timeout=10
00:00:47.211 > git config --get remote.origin.url # timeout=10
00:00:47.213 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:00:47.216 > git config --get submodule.xnvme.url # timeout=10
00:00:47.220 > git remote # timeout=10
00:00:47.223 > git config --get remote.origin.url # timeout=10
00:00:47.228 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:00:47.230 > git config --get submodule.isa-l-crypto.url # timeout=10
00:00:47.233 > git remote # timeout=10
00:00:47.240 > git config --get remote.origin.url # timeout=10
00:00:47.245 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:00:47.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:47.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:47.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:47.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:47.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:47.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:47.253 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:47.257 Setting http proxy: proxy-dmz.intel.com:911
00:00:47.258 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:00:47.258 Setting http proxy: proxy-dmz.intel.com:911
00:00:47.258 Setting http proxy: proxy-dmz.intel.com:911
00:00:47.258 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:00:47.258 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:00:47.258 Setting http proxy: proxy-dmz.intel.com:911
00:00:47.258 Setting http proxy: proxy-dmz.intel.com:911
00:00:47.258 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:00:47.258 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:00:47.258 Setting http proxy: proxy-dmz.intel.com:911
00:00:47.258 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:00:47.259 Setting http proxy: proxy-dmz.intel.com:911
00:00:47.259 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:01:15.055 [Pipeline] dir
00:01:15.055 Running in /var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:15.057 [Pipeline] {
00:01:15.070 [Pipeline] sh
00:01:15.350 ++ nproc
00:01:15.350 + threads=112
00:01:15.350 + git repack -a -d --threads=112
00:01:23.471 + git submodule foreach git repack -a -d --threads=112
00:01:23.471 Entering 'dpdk'
00:01:26.760 Entering 'intel-ipsec-mb'
00:01:27.019 Entering 'isa-l'
00:01:27.278 Entering 'isa-l-crypto'
00:01:27.539 Entering 'libvfio-user'
00:01:27.539 Entering 'ocf'
00:01:28.109 Entering 'xnvme'
00:01:28.109 + find .git -type f -name alternates -print -delete
00:01:28.109 .git/objects/info/alternates
00:01:28.109 .git/modules/isa-l-crypto/objects/info/alternates
00:01:28.109 .git/modules/ocf/objects/info/alternates
00:01:28.109 .git/modules/libvfio-user/objects/info/alternates
00:01:28.109 .git/modules/xnvme/objects/info/alternates
00:01:28.109 .git/modules/intel-ipsec-mb/objects/info/alternates
00:01:28.109 .git/modules/dpdk/objects/info/alternates
00:01:28.109 .git/modules/isa-l/objects/info/alternates
00:01:28.120 [Pipeline] }
00:01:28.137 [Pipeline] // dir
00:01:28.142 [Pipeline] }
00:01:28.158 [Pipeline] // retry
00:01:28.166 [Pipeline] sh
00:01:28.449 + hash pigz
00:01:28.449 + tar -czf spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz spdk
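
Note: the clone above borrows objects from the local mirror /var/ci_repos/spdk_multi through .git/objects/info/alternates files, so the repack/find sequence is what makes the tree self-contained before it is tarred up: repack copies every borrowed object into local packs, and deleting the alternates files severs the dependency on the mirror. A sketch of the same idea in isolation (assuming a reference repo exists at that path):

    git clone --reference /var/ci_repos/spdk_multi https://review.spdk.io/gerrit/a/spdk/spdk
    cd spdk
    git repack -a -d --threads="$(nproc)"               # materialize borrowed objects locally
    find .git -type f -name alternates -print -delete   # drop pointers to the reference repo
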
00:01:40.669 [Pipeline] retry
00:01:40.671 [Pipeline] {
00:01:40.685 [Pipeline] httpRequest
00:01:40.692 HttpMethod: PUT
00:01:40.693 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz
00:01:40.693 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz
00:02:07.282 Response Code: HTTP/1.1 200 OK
00:02:07.293 Success: Status code 200 is in the accepted range: 200
00:02:07.295 [Pipeline] }
00:02:07.312 [Pipeline] // retry
00:02:07.319 [Pipeline] echo
00:02:07.321
00:02:07.321 Locking
00:02:07.321 Waited 16s for lock
00:02:07.321 File already exists: /storage/packages/spdk_2104eacf0c136776cfdaa3ea9c187a7522b3ede0.tar.gz
00:02:07.321
00:02:07.325 [Pipeline] sh
00:02:07.606 + git -C spdk log --oneline -n5
00:02:07.606 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:02:07.606 66289a6db build: use VERSION file for storing version
00:02:07.606 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:02:07.606 cec5ba284 nvme/rdma: Register UMR per IO request
00:02:07.606 7219bd1a7 thread: use extended version of fd group add
00:02:07.617 [Pipeline] writeFile
00:02:07.626 [Pipeline] sh
00:02:07.902 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:07.914 [Pipeline] sh
00:02:08.198 + cat autorun-spdk.conf
00:02:08.198 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:08.198 SPDK_TEST_NVME=1
00:02:08.198 SPDK_TEST_FTL=1
00:02:08.198 SPDK_TEST_ISAL=1
00:02:08.198 SPDK_RUN_ASAN=1
00:02:08.198 SPDK_RUN_UBSAN=1
00:02:08.198 SPDK_TEST_XNVME=1
00:02:08.198 SPDK_TEST_NVME_FDP=1
00:02:08.198 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:08.205 RUN_NIGHTLY=0
00:02:08.207 [Pipeline] }
00:02:08.221 [Pipeline] // stage
00:02:08.238 [Pipeline] stage
00:02:08.240 [Pipeline] { (Run VM)
00:02:08.252 [Pipeline] sh
00:02:08.539 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:08.539 + echo 'Start stage prepare_nvme.sh'
00:02:08.539 Start stage prepare_nvme.sh
00:02:08.539 + [[ -n 4 ]]
00:02:08.539 + disk_prefix=ex4
00:02:08.539 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:08.539 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:08.539 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:08.539 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:08.539 ++ SPDK_TEST_NVME=1
00:02:08.539 ++ SPDK_TEST_FTL=1
00:02:08.539 ++ SPDK_TEST_ISAL=1
00:02:08.539 ++ SPDK_RUN_ASAN=1
00:02:08.539 ++ SPDK_RUN_UBSAN=1
00:02:08.539 ++ SPDK_TEST_XNVME=1
00:02:08.539 ++ SPDK_TEST_NVME_FDP=1
00:02:08.539 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:08.539 ++ RUN_NIGHTLY=0
00:02:08.539 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:08.539 + nvme_files=()
00:02:08.539 + declare -A nvme_files
00:02:08.539 + backend_dir=/var/lib/libvirt/images/backends
00:02:08.539 + nvme_files['nvme.img']=5G
00:02:08.539 + nvme_files['nvme-cmb.img']=5G
00:02:08.539 + nvme_files['nvme-multi0.img']=4G
00:02:08.539 + nvme_files['nvme-multi1.img']=4G
00:02:08.539 + nvme_files['nvme-multi2.img']=4G
00:02:08.539 + nvme_files['nvme-openstack.img']=8G
00:02:08.539 + nvme_files['nvme-zns.img']=5G
00:02:08.539 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:08.539 + (( SPDK_TEST_FTL == 1 ))
00:02:08.539 + nvme_files["nvme-ftl.img"]=6G
00:02:08.539 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:08.539 + nvme_files["nvme-fdp.img"]=1G
00:02:08.539 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:08.539 + for nvme in "${!nvme_files[@]}"
00:02:08.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:02:08.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:08.539 + for nvme in "${!nvme_files[@]}"
00:02:08.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:02:08.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:08.539 + for nvme in "${!nvme_files[@]}"
00:02:08.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:02:08.539 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:08.539 + for nvme in "${!nvme_files[@]}"
00:02:08.539 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:02:08.798 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:08.798 + for nvme in "${!nvme_files[@]}"
00:02:08.798 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:02:08.798 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:08.798 + for nvme in "${!nvme_files[@]}"
00:02:08.798 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:02:08.799 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:08.799 + for nvme in "${!nvme_files[@]}"
00:02:08.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:02:08.799 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:08.799 + for nvme in "${!nvme_files[@]}"
00:02:08.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:02:08.799 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:08.799 + for nvme in "${!nvme_files[@]}"
00:02:08.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:02:09.058 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:09.058 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:02:09.058 + echo 'End stage prepare_nvme.sh'
00:02:09.058 End stage prepare_nvme.sh
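
Note: the "Formatting ..." lines above are qemu-img output produced while create_nvme_img.sh builds one raw, fallocate-preallocated backing file per emulated NVMe drive. A rough standalone equivalent for a single image (the script's exact internals are assumed):

    # Create a fully mapped 5 GiB raw image to back an emulated NVMe drive.
    qemu-img create -f raw -o preallocation=falloc /var/lib/libvirt/images/backends/ex4-nvme.img 5G
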
00:02:09.069 [Pipeline] sh
00:02:09.352 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:09.353 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:09.353
00:02:09.353 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:09.353 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:09.353 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:09.353 HELP=0
00:02:09.353 DRY_RUN=0
00:02:09.353 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:02:09.353 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:09.353 NVME_AUTO_CREATE=0
00:02:09.353 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:02:09.353 NVME_CMB=,,,,
00:02:09.353 NVME_PMR=,,,,
00:02:09.353 NVME_ZNS=,,,,
00:02:09.353 NVME_MS=true,,,,
00:02:09.353 NVME_FDP=,,,on,
00:02:09.353 SPDK_VAGRANT_DISTRO=fedora39
00:02:09.353 SPDK_VAGRANT_VMCPU=10
00:02:09.353 SPDK_VAGRANT_VMRAM=12288
00:02:09.353 SPDK_VAGRANT_PROVIDER=libvirt
00:02:09.353 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:09.353 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:09.353 SPDK_OPENSTACK_NETWORK=0
00:02:09.353 VAGRANT_PACKAGE_BOX=0
00:02:09.353 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:09.353 FORCE_DISTRO=true
00:02:09.353 VAGRANT_BOX_VERSION=
00:02:09.353 EXTRA_VAGRANTFILES=
00:02:09.353 NIC_MODEL=e1000
00:02:09.353
00:02:09.353 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:09.353 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:12.644 Bringing machine 'default' up with 'libvirt' provider...
00:02:13.581 ==> default: Creating image (snapshot of base box volume).
00:02:13.841 ==> default: Creating domain with the following settings...
00:02:13.841 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733866400_9518e8e3641613ba086d
00:02:13.841 ==> default: -- Domain type: kvm
00:02:13.841 ==> default: -- Cpus: 10
00:02:13.841 ==> default: -- Feature: acpi
00:02:13.841 ==> default: -- Feature: apic
00:02:13.841 ==> default: -- Feature: pae
00:02:13.841 ==> default: -- Memory: 12288M
00:02:13.841 ==> default: -- Memory Backing: hugepages:
00:02:13.841 ==> default: -- Management MAC:
00:02:13.841 ==> default: -- Loader:
00:02:13.841 ==> default: -- Nvram:
00:02:13.841 ==> default: -- Base box: spdk/fedora39
00:02:13.841 ==> default: -- Storage pool: default
00:02:13.841 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733866400_9518e8e3641613ba086d.img (20G)
00:02:13.841 ==> default: -- Volume Cache: default
00:02:13.841 ==> default: -- Kernel:
00:02:13.841 ==> default: -- Initrd:
00:02:13.841 ==> default: -- Graphics Type: vnc
00:02:13.841 ==> default: -- Graphics Port: -1
00:02:13.841 ==> default: -- Graphics IP: 127.0.0.1
00:02:13.841 ==> default: -- Graphics Password: Not defined
00:02:13.841 ==> default: -- Video Type: cirrus
00:02:13.841 ==> default: -- Video VRAM: 9216
00:02:13.841 ==> default: -- Sound Type:
00:02:13.841 ==> default: -- Keymap: en-us
00:02:13.841 ==> default: -- TPM Path:
00:02:13.841 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:13.841 ==> default: -- Command line args:
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:13.841 ==> default: -> value=-drive,
00:02:13.841 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:13.841 ==> default: -> value=-drive,
00:02:13.841 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:13.841 ==> default: -> value=-drive,
00:02:13.841 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:13.841 ==> default: -> value=-drive,
00:02:13.841 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:13.841 ==> default: -> value=-drive,
00:02:13.841 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:13.841 ==> default: -> value=-drive,
00:02:13.841 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:13.841 ==> default: -> value=-device,
00:02:13.841 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
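
Note: the command-line args above show the QEMU attachment pattern for each backend image: a -drive with if=none supplies the backing file, an nvme -device supplies the controller, and an nvme-ns -device binds the file to a namespace. The FDP disk additionally hangs its controller off an nvme-subsys with fdp=on. A trimmed sketch of just that last grouping (options copied from the log; the rest of the command line omitted):

    qemu-system-x86_64 ... \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1
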
00:02:14.101 ==> default: Creating shared folders metadata...
00:02:14.360 ==> default: Starting domain.
00:02:16.896 ==> default: Waiting for domain to get an IP address...
00:02:34.992 ==> default: Waiting for SSH to become available...
00:02:34.992 ==> default: Configuring and enabling network interfaces...
00:02:40.264 default: SSH address: 192.168.121.250:22
00:02:40.264 default: SSH username: vagrant
00:02:40.264 default: SSH auth method: private key
00:02:43.551 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:53.528 ==> default: Mounting SSHFS shared folder...
00:02:54.904 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:54.904 ==> default: Checking Mount..
00:02:56.808 ==> default: Folder Successfully Mounted!
00:02:56.808 ==> default: Running provisioner: file...
00:02:57.744 default: ~/.gitconfig => .gitconfig
00:02:58.003
00:02:58.003 SUCCESS!
00:02:58.003
00:02:58.003 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:58.003 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:58.003 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:58.003
00:02:58.012 [Pipeline] }
00:02:58.029 [Pipeline] // stage
00:02:58.038 [Pipeline] dir
00:02:58.039 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:58.041 [Pipeline] {
00:02:58.055 [Pipeline] catchError
00:02:58.056 [Pipeline] {
00:02:58.069 [Pipeline] sh
00:02:58.350 + vagrant ssh-config --host vagrant
00:02:58.350 + sed -ne /^Host/,$p
00:02:58.350 + tee ssh_conf
00:03:01.638 Host vagrant
00:03:01.638 HostName 192.168.121.250
00:03:01.638 User vagrant
00:03:01.638 Port 22
00:03:01.638 UserKnownHostsFile /dev/null
00:03:01.638 StrictHostKeyChecking no
00:03:01.638 PasswordAuthentication no
00:03:01.638 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:01.638 IdentitiesOnly yes
00:03:01.638 LogLevel FATAL
00:03:01.638 ForwardAgent yes
00:03:01.638 ForwardX11 yes
00:03:01.638
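
Note: the three-command pipeline above captures vagrant's generated SSH client settings into a plain file, so every later step can use stock ssh/scp with -F ssh_conf instead of the slower `vagrant ssh` wrapper. The equivalent one-liner plus a usage check:

    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
    ssh -F ssh_conf vagrant hostname   # should reach the VM at 192.168.121.250
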
00:03:01.652 [Pipeline] withEnv
00:03:01.654 [Pipeline] {
00:03:01.668 [Pipeline] sh
00:03:01.951 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:01.951 source /etc/os-release
00:03:01.951 [[ -e /image.version ]] && img=$(< /image.version)
00:03:01.951 # Minimal, systemd-like check.
00:03:01.951 if [[ -e /.dockerenv ]]; then
00:03:01.951 # Clear garbage from the node's name:
00:03:01.951 # agt-er_autotest_547-896 -> autotest_547-896
00:03:01.951 # $HOSTNAME is the actual container id
00:03:01.951 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:01.951 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:01.951 # We can assume this is a mount from a host where container is running,
00:03:01.951 # so fetch its hostname to easily identify the target swarm worker.
00:03:01.951 container="$(< /etc/hostname) ($agent)"
00:03:01.951 else
00:03:01.951 # Fallback
00:03:01.951 container=$agent
00:03:01.951 fi
00:03:01.951 fi
00:03:01.951 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:01.951
00:03:02.221 [Pipeline] }
00:03:02.238 [Pipeline] // withEnv
00:03:02.246 [Pipeline] setCustomBuildProperty
00:03:02.263 [Pipeline] stage
00:03:02.266 [Pipeline] { (Tests)
00:03:02.283 [Pipeline] sh
00:03:02.565 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:02.838 [Pipeline] sh
00:03:03.118 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:03.391 [Pipeline] timeout
00:03:03.392 Timeout set to expire in 50 min
00:03:03.394 [Pipeline] {
00:03:03.411 [Pipeline] sh
00:03:03.690 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:04.255 HEAD is now at 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:03:04.267 [Pipeline] sh
00:03:04.577 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:04.848 [Pipeline] sh
00:03:05.130 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:05.404 [Pipeline] sh
00:03:05.683 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:03:05.942 ++ readlink -f spdk_repo
00:03:05.942 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:05.942 + [[ -n /home/vagrant/spdk_repo ]]
00:03:05.942 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:05.942 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:05.942 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:05.942 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:05.942 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:05.942 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:05.942 + cd /home/vagrant/spdk_repo
00:03:05.942 + source /etc/os-release
00:03:05.942 ++ NAME='Fedora Linux'
00:03:05.942 ++ VERSION='39 (Cloud Edition)'
00:03:05.942 ++ ID=fedora
00:03:05.942 ++ VERSION_ID=39
00:03:05.942 ++ VERSION_CODENAME=
00:03:05.942 ++ PLATFORM_ID=platform:f39
00:03:05.942 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:05.942 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:05.942 ++ LOGO=fedora-logo-icon
00:03:05.942 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:05.942 ++ HOME_URL=https://fedoraproject.org/
00:03:05.943 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:05.943 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:05.943 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:05.943 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:05.943 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:05.943 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:05.943 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:05.943 ++ SUPPORT_END=2024-11-12
00:03:05.943 ++ VARIANT='Cloud Edition'
00:03:05.943 ++ VARIANT_ID=cloud
00:03:05.943 + uname -a
00:03:05.943 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:05.943 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:06.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:06.772 Hugepages
00:03:06.772 node     hugesize     free /  total
00:03:06.772 node0   1048576kB        0 /      0
00:03:06.772 node0      2048kB        0 /      0
00:03:06.772
00:03:06.772 Type     BDF             Vendor Device NUMA    Driver      Device   Block devices
00:03:06.772 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci  -        vda
00:03:06.772 NVMe     0000:00:10.0    1b36   0010   unknown nvme        nvme0    nvme0n1
00:03:06.772 NVMe     0000:00:11.0    1b36   0010   unknown nvme        nvme1    nvme1n1
00:03:06.772 NVMe     0000:00:12.0    1b36   0010   unknown nvme        nvme2    nvme2n1 nvme2n2 nvme2n3
00:03:06.772 NVMe     0000:00:13.0    1b36   0010   unknown nvme        nvme3    nvme3n1
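
Note: `setup.sh status` reports the hugepage pools per NUMA node and the PCI devices SPDK could bind; the virtio disk is skipped because it backs active mounts (vda2/vda3/vda5). The hugepage numbers come straight from sysfs; a hypothetical direct query:

    # Per-node counts behind the Hugepages table above.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-*/free_hugepages \
           /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages
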
00:03:06.772 + rm -f /tmp/spdk-ld-path
00:03:06.772 + source autorun-spdk.conf
00:03:06.772 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:06.772 ++ SPDK_TEST_NVME=1
00:03:06.772 ++ SPDK_TEST_FTL=1
00:03:06.772 ++ SPDK_TEST_ISAL=1
00:03:06.772 ++ SPDK_RUN_ASAN=1
00:03:06.772 ++ SPDK_RUN_UBSAN=1
00:03:06.772 ++ SPDK_TEST_XNVME=1
00:03:06.772 ++ SPDK_TEST_NVME_FDP=1
00:03:06.772 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:06.772 ++ RUN_NIGHTLY=0
00:03:06.772 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:06.772 + [[ -n '' ]]
00:03:06.772 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:06.772 + for M in /var/spdk/build-*-manifest.txt
00:03:06.772 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:06.772 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:06.772 + for M in /var/spdk/build-*-manifest.txt
00:03:06.772 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:06.772 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:06.772 + for M in /var/spdk/build-*-manifest.txt
00:03:06.772 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:06.772 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:06.772 ++ uname
00:03:06.772 + [[ Linux == \L\i\n\u\x ]]
00:03:06.772 + sudo dmesg -T
00:03:07.031 + sudo dmesg --clear
00:03:07.031 + dmesg_pid=5249
00:03:07.031 + [[ Fedora Linux == FreeBSD ]]
00:03:07.031 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:07.031 + sudo dmesg -Tw
00:03:07.031 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:07.031 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:07.031 + [[ -x /usr/src/fio-static/fio ]]
00:03:07.031 + export FIO_BIN=/usr/src/fio-static/fio
00:03:07.031 + FIO_BIN=/usr/src/fio-static/fio
00:03:07.031 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:07.031 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:07.031 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:07.031 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:07.031 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:07.031 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:07.031 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:07.031 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:07.031 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:07.031 21:34:14 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:07.031 21:34:14 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:07.031 21:34:14 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:03:07.031 21:34:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:07.031 21:34:14 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:07.031 21:34:14 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:07.031 21:34:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:07.031 21:34:14 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:07.031 21:34:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:07.031 21:34:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:07.031 21:34:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:07.031 21:34:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.031 21:34:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.032 21:34:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.032 21:34:14 -- paths/export.sh@5 -- $ export PATH
00:03:07.032 21:34:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:07.032 21:34:14 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:07.032 21:34:14 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:07.032 21:34:14 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733866454.XXXXXX
00:03:07.032 21:34:14 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733866454.w5U5sn
00:03:07.032 21:34:14 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:07.032 21:34:14 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:07.032 21:34:14 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:07.032 21:34:14 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:07.032 21:34:14 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:07.032 21:34:14 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:07.032 21:34:14 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:07.032 21:34:14 -- common/autotest_common.sh@10 -- $ set +x
00:03:07.032 21:34:14 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:03:07.032 21:34:14 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:07.032 21:34:14 -- pm/common@17 -- $ local monitor
00:03:07.032 21:34:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:07.032 21:34:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:07.032 21:34:14 -- pm/common@25 -- $ sleep 1
00:03:07.032 21:34:14 -- pm/common@21 -- $ date +%s
00:03:07.032 21:34:14 -- pm/common@21 -- $ date +%s
00:03:07.032 21:34:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733866454
00:03:07.032 21:34:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733866454
00:03:07.291 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733866454_collect-cpu-load.pm.log
00:03:07.291 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733866454_collect-vmstat.pm.log
00:03:08.228 21:34:15 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:08.228 21:34:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:08.228 21:34:15 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:08.228 21:34:15 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:08.228 21:34:15 -- spdk/autobuild.sh@16 -- $ date -u
00:03:08.228 Tue Dec 10 09:34:15 PM UTC 2024
00:03:08.228 21:34:15 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:08.228 v25.01-pre-331-g2104eacf0
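
Note: the `git describe --tags` string decodes as <nearest tag>-<commits since tag>-g<abbreviated sha>, so v25.01-pre-331-g2104eacf0 means this build sits 331 commits past the v25.01-pre tag, at commit 2104eacf0. The pieces can be checked individually (assuming the tag is present locally):

    git describe --tags                      # v25.01-pre-331-g2104eacf0
    git rev-list --count v25.01-pre..HEAD    # 331
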
00:03:08.228 21:34:15 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:08.228 21:34:15 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:08.228 21:34:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:08.228 21:34:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:08.228 21:34:15 -- common/autotest_common.sh@10 -- $ set +x
00:03:08.228 ************************************
00:03:08.228 START TEST asan
00:03:08.228 ************************************
00:03:08.228 using asan
00:03:08.228 21:34:15 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:08.228
00:03:08.228 real	0m0.000s
00:03:08.228 user	0m0.000s
00:03:08.228 sys	0m0.000s
00:03:08.228 21:34:15 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:08.228 21:34:15 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:08.228 ************************************
00:03:08.228 END TEST asan
00:03:08.228 ************************************
00:03:08.228 21:34:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:08.228 21:34:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:08.228 21:34:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:08.228 21:34:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:08.228 21:34:15 -- common/autotest_common.sh@10 -- $ set +x
00:03:08.228 ************************************
00:03:08.228 START TEST ubsan
00:03:08.228 ************************************
00:03:08.228 using ubsan
00:03:08.228 21:34:15 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:08.228
00:03:08.228 real	0m0.001s
00:03:08.228 user	0m0.001s
00:03:08.228 sys	0m0.000s
00:03:08.228 21:34:15 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:08.228 21:34:15 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:08.228 ************************************
00:03:08.228 END TEST ubsan
00:03:08.228 ************************************
00:03:08.228 21:34:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:08.228 21:34:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:08.228 21:34:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:08.228 21:34:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:08.228 21:34:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:08.228 21:34:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:08.228 21:34:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:08.228 21:34:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:08.228 21:34:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:08.487 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:08.487 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:09.055 Using 'verbs' RDMA provider
00:03:28.576 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:43.446 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:43.446 Creating mk/config.mk...done.
00:03:43.446 Creating mk/cc.flags.mk...done.
00:03:43.446 Type 'make' to build.
00:03:43.446 21:34:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:43.446 21:34:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:43.446 21:34:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:43.446 21:34:49 -- common/autotest_common.sh@10 -- $ set +x
00:03:43.446 ************************************
00:03:43.446 START TEST make
00:03:43.446 ************************************
00:03:43.446 21:34:49 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:43.446 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:43.446 	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:43.446 	meson setup builddir \
00:03:43.446 	-Dwith-libaio=enabled \
00:03:43.446 	-Dwith-liburing=enabled \
00:03:43.446 	-Dwith-libvfn=disabled \
00:03:43.446 	-Dwith-spdk=disabled \
00:03:43.446 	-Dexamples=false \
00:03:43.446 	-Dtests=false \
00:03:43.446 	-Dtools=false && \
00:03:43.446 	meson compile -C builddir && \
00:03:43.446 	cd -)
00:03:44.824 The Meson build system
00:03:44.824 Version: 1.5.0
00:03:44.824 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:44.824 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:44.824 Build type: native build
00:03:44.824 Project name: xnvme
00:03:44.824 Project version: 0.7.5
00:03:44.824 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:44.824 C linker for the host machine: cc ld.bfd 2.40-14
00:03:44.824 Host machine cpu family: x86_64
00:03:44.824 Host machine cpu: x86_64
00:03:44.824 Message: host_machine.system: linux
00:03:44.824 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:44.824 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:44.824 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:44.824 Run-time dependency threads found: YES
00:03:44.824 Has header "setupapi.h" : NO
00:03:44.824 Has header "linux/blkzoned.h" : YES
00:03:44.824 Has header "linux/blkzoned.h" : YES (cached)
00:03:44.824 Has header "libaio.h" : YES
00:03:44.824 Library aio found: YES
00:03:44.824 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:44.824 Run-time dependency liburing found: YES 2.2
00:03:44.824 Dependency libvfn skipped: feature with-libvfn disabled
00:03:44.824 Found CMake: /usr/bin/cmake (3.27.7)
00:03:44.824 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:44.824 Subproject spdk : skipped: feature with-spdk disabled
00:03:44.824 Run-time dependency appleframeworks found: NO (tried framework)
00:03:44.824 Run-time dependency appleframeworks found: NO (tried framework)
00:03:44.824 Library rt found: YES
00:03:44.824 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:44.824 Configuring xnvme_config.h using configuration
00:03:44.824 Configuring xnvme.spec using configuration
00:03:44.824 Run-time dependency bash-completion found: YES 2.11
00:03:44.824 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:44.824 Program cp found: YES (/usr/bin/cp)
00:03:44.824 Build targets in project: 3
00:03:44.824
00:03:44.824 xnvme 0.7.5
00:03:44.824
00:03:44.824   Subprojects
00:03:44.824     spdk         : NO Feature 'with-spdk' disabled
00:03:44.824
00:03:44.824   User defined options
00:03:44.824     examples     : false
00:03:44.824     tests        : false
00:03:44.824     tools        : false
00:03:44.824     with-libaio  : enabled
00:03:44.824     with-liburing: enabled
00:03:44.824     with-libvfn  : disabled
00:03:44.824     with-spdk    : disabled
00:03:44.824
00:03:44.824 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
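
Note: the "User defined options" block above reflects Meson feature options (-Dwith-libaio=enabled and friends) passed at setup time; features resolve to enabled/disabled/auto rather than plain booleans. They can also be flipped later on an existing build directory without starting over, e.g. (illustrative only, not a step this build performs):

    meson configure builddir -Dwith-libvfn=enabled
    meson compile -C builddir
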
00:03:45.391 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:45.391 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:45.391 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:45.391 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:45.391 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:45.391 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:45.391 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:45.391 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:45.391 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:45.650 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:45.650 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:45.650 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:45.650 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:45.650 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:45.650 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:45.650 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:45.650 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:45.650 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:45.650 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:45.650 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:45.650 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:45.650 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:45.650 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:45.650 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:45.650 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:45.650 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:45.650 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:45.650 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:45.650 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:45.650 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:45.650 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:45.908 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:45.908 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:45.908 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:45.908 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:45.908 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:45.908 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:45.908 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:45.908 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:45.908 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:45.908 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:45.908 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:45.908 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:45.908 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:45.908 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:45.908 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:45.908 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:45.908 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:45.908 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:45.908 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:45.908 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:45.908 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:45.908 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:45.908 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:45.908 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:45.908 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:45.908 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:45.908 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:45.908 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:45.908 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:46.167 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:46.167 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:46.167 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:46.167 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:46.167 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:46.167 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:46.167 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:46.167 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:46.167 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:46.167 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:46.167 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:46.167 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:46.167 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:46.425 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:46.682 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:46.682 [75/76] Linking static target lib/libxnvme.a
00:03:46.682 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:46.682 INFO: autodetecting backend as ninja
00:03:46.682 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:46.939 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:56.964 The Meson build system
00:03:56.964 Version: 1.5.0
00:03:56.964 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:56.964 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:56.964 Build type: native build
00:03:56.965 Program cat found: YES (/usr/bin/cat)
00:03:56.965 Project name: DPDK
00:03:56.965 Project version: 24.03.0
00:03:56.965 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:56.965 C linker for the host machine: cc ld.bfd 2.40-14
00:03:56.965 Host machine cpu family: x86_64
00:03:56.965 Host machine cpu: x86_64
00:03:56.965 Message: ## Building in Developer Mode ##
00:03:56.965 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:56.965 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:56.965 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:56.965 Program python3 found: YES (/usr/bin/python3)
00:03:56.965 Program cat found: YES (/usr/bin/cat)
00:03:56.965 Compiler for C supports arguments -march=native: YES
00:03:56.965 Checking for size of "void *" : 8
00:03:56.965 Checking for size of "void *" : 8 (cached)
00:03:56.965 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:56.965 Library m found: YES
00:03:56.965 Library numa found: YES
00:03:56.965 Has header "numaif.h" : YES
00:03:56.965 Library fdt found: NO
00:03:56.965 Library execinfo found: NO
00:03:56.965 Has header "execinfo.h" : YES
00:03:56.965 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:56.965 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:56.965 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:56.965 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:56.965 Run-time dependency openssl found: YES 3.1.1
00:03:56.965 Run-time dependency libpcap found: YES 1.10.4
00:03:56.965 Has header "pcap.h" with dependency libpcap: YES
00:03:56.965 Compiler for C supports arguments -Wcast-qual: YES
00:03:56.965 Compiler for C supports arguments -Wdeprecated: YES
00:03:56.965 Compiler for C supports arguments -Wformat: YES
00:03:56.965 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:56.965 Compiler for C supports arguments -Wformat-security: NO
00:03:56.965 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:56.965 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:56.965 Compiler for C supports arguments -Wnested-externs: YES
00:03:56.965 Compiler for C supports arguments -Wold-style-definition: YES
00:03:56.965 Compiler for C supports arguments -Wpointer-arith: YES
00:03:56.965 Compiler for C supports arguments -Wsign-compare: YES
00:03:56.965 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:56.965 Compiler for C supports arguments -Wundef: YES
00:03:56.965 Compiler for C supports arguments -Wwrite-strings: YES
00:03:56.965 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:56.965 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:56.965 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:56.965 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:56.965 Program objdump found: YES (/usr/bin/objdump)
00:03:56.965 Compiler for C supports arguments -mavx512f: YES
00:03:56.965 Checking if "AVX512 checking" compiles: YES
00:03:56.965 Fetching value of define "__SSE4_2__" : 1
00:03:56.965 Fetching value of define "__AES__" : 1
00:03:56.965 Fetching value of define "__AVX__" : 1
00:03:56.965 Fetching value of define "__AVX2__" : 1
00:03:56.965 Fetching value of define "__AVX512BW__" : 1
00:03:56.965 Fetching value of define "__AVX512CD__" : 1
00:03:56.965 Fetching value of define "__AVX512DQ__" : 1
00:03:56.965 Fetching value of define "__AVX512F__" : 1
00:03:56.965 Fetching value of define "__AVX512VL__" : 1
00:03:56.965 Fetching value of define "__PCLMUL__" : 1
00:03:56.965 Fetching value of define "__RDRND__" : 1
00:03:56.965 Fetching value of define "__RDSEED__" : 1
00:03:56.965 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:56.965 Fetching value of define "__znver1__" : (undefined)
00:03:56.965 Fetching value of define "__znver2__" : (undefined)
00:03:56.965 Fetching value of define "__znver3__" : (undefined)
00:03:56.965 Fetching value of define "__znver4__" : (undefined)
00:03:56.965 Library asan found: YES
00:03:56.965 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:56.965 Message: lib/log: Defining dependency "log"
00:03:56.965 Message: lib/kvargs: Defining dependency "kvargs"
00:03:56.965 Message: lib/telemetry: Defining dependency "telemetry"
00:03:56.965 Library rt found: YES
00:03:56.965 Checking for function "getentropy" : NO
00:03:56.965 Message: lib/eal: Defining dependency "eal"
00:03:56.965 Message: lib/ring: Defining dependency "ring"
00:03:56.965 Message: lib/rcu: Defining dependency "rcu"
00:03:56.965 Message: lib/mempool: Defining dependency "mempool"
00:03:56.965 Message: lib/mbuf: Defining dependency "mbuf"
00:03:56.965 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:56.965 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:56.965 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:56.965 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:56.965 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:56.965 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:56.965 Compiler for C supports arguments -mpclmul: YES
00:03:56.965 Compiler for C supports arguments -maes: YES
00:03:56.965 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:56.965 Compiler for C supports arguments -mavx512bw: YES
00:03:56.965 Compiler for C supports arguments -mavx512dq: YES
00:03:56.965 Compiler for C supports arguments -mavx512vl: YES
00:03:56.965 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:56.965 Compiler for C supports arguments -mavx2: YES
00:03:56.965 Compiler for C supports arguments -mavx: YES
00:03:56.965 Message: lib/net: Defining dependency "net"
00:03:56.965 Message: lib/meter: Defining dependency "meter"
00:03:56.965 Message: lib/ethdev: Defining dependency "ethdev"
00:03:56.965 Message: lib/pci: Defining dependency "pci"
00:03:56.965 Message: lib/cmdline: Defining dependency "cmdline"
00:03:56.965 Message: lib/hash: Defining dependency "hash"
00:03:56.965 Message: lib/timer: Defining dependency "timer"
00:03:56.965 Message: lib/compressdev: Defining dependency "compressdev"
00:03:56.965 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:56.965 Message: lib/dmadev: Defining dependency "dmadev"
00:03:56.965 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:56.965 Message: lib/power: Defining dependency "power"
00:03:56.965 Message: lib/reorder: Defining dependency "reorder"
00:03:56.965 Message: lib/security: Defining dependency "security"
00:03:56.965 Has header "linux/userfaultfd.h" : YES
00:03:56.965 Has header "linux/vduse.h" : YES
00:03:56.965 Message: lib/vhost: Defining dependency "vhost"
00:03:56.965 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:56.965 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:56.965 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:56.965 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:56.965 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:56.965 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:56.965 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:56.965 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:56.965 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:56.965 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:56.965 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:56.965 Configuring doxy-api-html.conf using configuration
00:03:56.965 Configuring doxy-api-man.conf using configuration
00:03:56.965 Program mandb found: YES (/usr/bin/mandb)
00:03:56.965 Program sphinx-build found: NO
00:03:56.965 Configuring rte_build_config.h using configuration
00:03:56.965 Message:
00:03:56.965 =================
00:03:56.965 Applications Enabled
00:03:56.965 =================
00:03:56.965
00:03:56.965 apps:
00:03:56.965
00:03:56.965
00:03:56.965 Message:
00:03:56.965 =================
00:03:56.965 Libraries Enabled
00:03:56.965 =================
00:03:56.965
00:03:56.965 libs:
00:03:56.965 	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:56.965 	net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:56.965 	cryptodev, dmadev, power, reorder, security, vhost,
00:03:56.965
00:03:56.965 Message:
00:03:56.965 ===============
00:03:56.965 Drivers Enabled
00:03:56.965 ===============
00:03:56.965
00:03:56.965 common:
00:03:56.965
00:03:56.965 bus:
00:03:56.965 	pci, vdev,
00:03:56.965 mempool:
00:03:56.965 	ring,
00:03:56.965 dma:
00:03:56.965
00:03:56.965 net:
00:03:56.965
00:03:56.965 crypto:
00:03:56.965
00:03:56.965 compress:
00:03:56.965
00:03:56.965 vdpa:
00:03:56.965
00:03:56.965
00:03:56.965 Message:
00:03:56.965 =================
00:03:56.965 Content Skipped
00:03:56.965 =================
00:03:56.965
00:03:56.965 apps:
00:03:56.965 	dumpcap: explicitly disabled via build config
00:03:56.965 	graph: explicitly disabled via build config
00:03:56.965 	pdump: explicitly disabled via build config
00:03:56.965 	proc-info: explicitly disabled via build config
00:03:56.965 	test-acl: explicitly disabled via build config
00:03:56.965 	test-bbdev: explicitly disabled via build config
00:03:56.965 	test-cmdline: explicitly disabled via build config
00:03:56.965 	test-compress-perf: explicitly disabled via build config
00:03:56.965 	test-crypto-perf: explicitly disabled via build config
00:03:56.965 	test-dma-perf: explicitly disabled via build 
config 00:03:56.965 test-eventdev: explicitly disabled via build config 00:03:56.965 test-fib: explicitly disabled via build config 00:03:56.965 test-flow-perf: explicitly disabled via build config 00:03:56.965 test-gpudev: explicitly disabled via build config 00:03:56.965 test-mldev: explicitly disabled via build config 00:03:56.965 test-pipeline: explicitly disabled via build config 00:03:56.965 test-pmd: explicitly disabled via build config 00:03:56.965 test-regex: explicitly disabled via build config 00:03:56.965 test-sad: explicitly disabled via build config 00:03:56.965 test-security-perf: explicitly disabled via build config 00:03:56.965 00:03:56.965 libs: 00:03:56.965 argparse: explicitly disabled via build config 00:03:56.965 metrics: explicitly disabled via build config 00:03:56.965 acl: explicitly disabled via build config 00:03:56.965 bbdev: explicitly disabled via build config 00:03:56.965 bitratestats: explicitly disabled via build config 00:03:56.965 bpf: explicitly disabled via build config 00:03:56.966 cfgfile: explicitly disabled via build config 00:03:56.966 distributor: explicitly disabled via build config 00:03:56.966 efd: explicitly disabled via build config 00:03:56.966 eventdev: explicitly disabled via build config 00:03:56.966 dispatcher: explicitly disabled via build config 00:03:56.966 gpudev: explicitly disabled via build config 00:03:56.966 gro: explicitly disabled via build config 00:03:56.966 gso: explicitly disabled via build config 00:03:56.966 ip_frag: explicitly disabled via build config 00:03:56.966 jobstats: explicitly disabled via build config 00:03:56.966 latencystats: explicitly disabled via build config 00:03:56.966 lpm: explicitly disabled via build config 00:03:56.966 member: explicitly disabled via build config 00:03:56.966 pcapng: explicitly disabled via build config 00:03:56.966 rawdev: explicitly disabled via build config 00:03:56.966 regexdev: explicitly disabled via build config 00:03:56.966 mldev: explicitly disabled via build config 00:03:56.966 rib: explicitly disabled via build config 00:03:56.966 sched: explicitly disabled via build config 00:03:56.966 stack: explicitly disabled via build config 00:03:56.966 ipsec: explicitly disabled via build config 00:03:56.966 pdcp: explicitly disabled via build config 00:03:56.966 fib: explicitly disabled via build config 00:03:56.966 port: explicitly disabled via build config 00:03:56.966 pdump: explicitly disabled via build config 00:03:56.966 table: explicitly disabled via build config 00:03:56.966 pipeline: explicitly disabled via build config 00:03:56.966 graph: explicitly disabled via build config 00:03:56.966 node: explicitly disabled via build config 00:03:56.966 00:03:56.966 drivers: 00:03:56.966 common/cpt: not in enabled drivers build config 00:03:56.966 common/dpaax: not in enabled drivers build config 00:03:56.966 common/iavf: not in enabled drivers build config 00:03:56.966 common/idpf: not in enabled drivers build config 00:03:56.966 common/ionic: not in enabled drivers build config 00:03:56.966 common/mvep: not in enabled drivers build config 00:03:56.966 common/octeontx: not in enabled drivers build config 00:03:56.966 bus/auxiliary: not in enabled drivers build config 00:03:56.966 bus/cdx: not in enabled drivers build config 00:03:56.966 bus/dpaa: not in enabled drivers build config 00:03:56.966 bus/fslmc: not in enabled drivers build config 00:03:56.966 bus/ifpga: not in enabled drivers build config 00:03:56.966 bus/platform: not in enabled drivers build config 00:03:56.966 
bus/uacce: not in enabled drivers build config 00:03:56.966 bus/vmbus: not in enabled drivers build config 00:03:56.966 common/cnxk: not in enabled drivers build config 00:03:56.966 common/mlx5: not in enabled drivers build config 00:03:56.966 common/nfp: not in enabled drivers build config 00:03:56.966 common/nitrox: not in enabled drivers build config 00:03:56.966 common/qat: not in enabled drivers build config 00:03:56.966 common/sfc_efx: not in enabled drivers build config 00:03:56.966 mempool/bucket: not in enabled drivers build config 00:03:56.966 mempool/cnxk: not in enabled drivers build config 00:03:56.966 mempool/dpaa: not in enabled drivers build config 00:03:56.966 mempool/dpaa2: not in enabled drivers build config 00:03:56.966 mempool/octeontx: not in enabled drivers build config 00:03:56.966 mempool/stack: not in enabled drivers build config 00:03:56.966 dma/cnxk: not in enabled drivers build config 00:03:56.966 dma/dpaa: not in enabled drivers build config 00:03:56.966 dma/dpaa2: not in enabled drivers build config 00:03:56.966 dma/hisilicon: not in enabled drivers build config 00:03:56.966 dma/idxd: not in enabled drivers build config 00:03:56.966 dma/ioat: not in enabled drivers build config 00:03:56.966 dma/skeleton: not in enabled drivers build config 00:03:56.966 net/af_packet: not in enabled drivers build config 00:03:56.966 net/af_xdp: not in enabled drivers build config 00:03:56.966 net/ark: not in enabled drivers build config 00:03:56.966 net/atlantic: not in enabled drivers build config 00:03:56.966 net/avp: not in enabled drivers build config 00:03:56.966 net/axgbe: not in enabled drivers build config 00:03:56.966 net/bnx2x: not in enabled drivers build config 00:03:56.966 net/bnxt: not in enabled drivers build config 00:03:56.966 net/bonding: not in enabled drivers build config 00:03:56.966 net/cnxk: not in enabled drivers build config 00:03:56.966 net/cpfl: not in enabled drivers build config 00:03:56.966 net/cxgbe: not in enabled drivers build config 00:03:56.966 net/dpaa: not in enabled drivers build config 00:03:56.966 net/dpaa2: not in enabled drivers build config 00:03:56.966 net/e1000: not in enabled drivers build config 00:03:56.966 net/ena: not in enabled drivers build config 00:03:56.966 net/enetc: not in enabled drivers build config 00:03:56.966 net/enetfec: not in enabled drivers build config 00:03:56.966 net/enic: not in enabled drivers build config 00:03:56.966 net/failsafe: not in enabled drivers build config 00:03:56.966 net/fm10k: not in enabled drivers build config 00:03:56.966 net/gve: not in enabled drivers build config 00:03:56.966 net/hinic: not in enabled drivers build config 00:03:56.966 net/hns3: not in enabled drivers build config 00:03:56.966 net/i40e: not in enabled drivers build config 00:03:56.966 net/iavf: not in enabled drivers build config 00:03:56.966 net/ice: not in enabled drivers build config 00:03:56.966 net/idpf: not in enabled drivers build config 00:03:56.966 net/igc: not in enabled drivers build config 00:03:56.966 net/ionic: not in enabled drivers build config 00:03:56.966 net/ipn3ke: not in enabled drivers build config 00:03:56.966 net/ixgbe: not in enabled drivers build config 00:03:56.966 net/mana: not in enabled drivers build config 00:03:56.966 net/memif: not in enabled drivers build config 00:03:56.966 net/mlx4: not in enabled drivers build config 00:03:56.966 net/mlx5: not in enabled drivers build config 00:03:56.966 net/mvneta: not in enabled drivers build config 00:03:56.966 net/mvpp2: not in enabled drivers 
build config 00:03:56.966 net/netvsc: not in enabled drivers build config 00:03:56.966 net/nfb: not in enabled drivers build config 00:03:56.966 net/nfp: not in enabled drivers build config 00:03:56.966 net/ngbe: not in enabled drivers build config 00:03:56.966 net/null: not in enabled drivers build config 00:03:56.966 net/octeontx: not in enabled drivers build config 00:03:56.966 net/octeon_ep: not in enabled drivers build config 00:03:56.966 net/pcap: not in enabled drivers build config 00:03:56.966 net/pfe: not in enabled drivers build config 00:03:56.966 net/qede: not in enabled drivers build config 00:03:56.966 net/ring: not in enabled drivers build config 00:03:56.966 net/sfc: not in enabled drivers build config 00:03:56.966 net/softnic: not in enabled drivers build config 00:03:56.966 net/tap: not in enabled drivers build config 00:03:56.966 net/thunderx: not in enabled drivers build config 00:03:56.966 net/txgbe: not in enabled drivers build config 00:03:56.966 net/vdev_netvsc: not in enabled drivers build config 00:03:56.966 net/vhost: not in enabled drivers build config 00:03:56.966 net/virtio: not in enabled drivers build config 00:03:56.966 net/vmxnet3: not in enabled drivers build config 00:03:56.966 raw/*: missing internal dependency, "rawdev" 00:03:56.966 crypto/armv8: not in enabled drivers build config 00:03:56.966 crypto/bcmfs: not in enabled drivers build config 00:03:56.966 crypto/caam_jr: not in enabled drivers build config 00:03:56.966 crypto/ccp: not in enabled drivers build config 00:03:56.966 crypto/cnxk: not in enabled drivers build config 00:03:56.966 crypto/dpaa_sec: not in enabled drivers build config 00:03:56.966 crypto/dpaa2_sec: not in enabled drivers build config 00:03:56.966 crypto/ipsec_mb: not in enabled drivers build config 00:03:56.966 crypto/mlx5: not in enabled drivers build config 00:03:56.966 crypto/mvsam: not in enabled drivers build config 00:03:56.966 crypto/nitrox: not in enabled drivers build config 00:03:56.966 crypto/null: not in enabled drivers build config 00:03:56.966 crypto/octeontx: not in enabled drivers build config 00:03:56.966 crypto/openssl: not in enabled drivers build config 00:03:56.966 crypto/scheduler: not in enabled drivers build config 00:03:56.966 crypto/uadk: not in enabled drivers build config 00:03:56.966 crypto/virtio: not in enabled drivers build config 00:03:56.966 compress/isal: not in enabled drivers build config 00:03:56.966 compress/mlx5: not in enabled drivers build config 00:03:56.966 compress/nitrox: not in enabled drivers build config 00:03:56.966 compress/octeontx: not in enabled drivers build config 00:03:56.966 compress/zlib: not in enabled drivers build config 00:03:56.966 regex/*: missing internal dependency, "regexdev" 00:03:56.966 ml/*: missing internal dependency, "mldev" 00:03:56.966 vdpa/ifc: not in enabled drivers build config 00:03:56.966 vdpa/mlx5: not in enabled drivers build config 00:03:56.966 vdpa/nfp: not in enabled drivers build config 00:03:56.966 vdpa/sfc: not in enabled drivers build config 00:03:56.966 event/*: missing internal dependency, "eventdev" 00:03:56.966 baseband/*: missing internal dependency, "bbdev" 00:03:56.966 gpu/*: missing internal dependency, "gpudev" 00:03:56.966 00:03:56.966 00:03:56.966 Build targets in project: 85 00:03:56.966 00:03:56.966 DPDK 24.03.0 00:03:56.966 00:03:56.966 User defined options 00:03:56.966 buildtype : debug 00:03:56.966 default_library : shared 00:03:56.966 libdir : lib 00:03:56.966 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:56.966 
b_sanitize : address 00:03:56.966 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:56.966 c_link_args : 00:03:56.966 cpu_instruction_set: native 00:03:56.966 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:56.966 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:56.966 enable_docs : false 00:03:56.966 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:56.966 enable_kmods : false 00:03:56.966 max_lcores : 128 00:03:56.966 tests : false 00:03:56.967 00:03:56.967 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:56.967 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:56.967 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:56.967 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:57.227 [3/268] Linking static target lib/librte_kvargs.a 00:03:57.227 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:57.227 [5/268] Linking static target lib/librte_log.a 00:03:57.227 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:57.486 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:57.486 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:57.486 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:57.486 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.745 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:57.745 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:57.745 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:57.745 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:57.745 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:57.745 [16/268] Linking static target lib/librte_telemetry.a 00:03:57.745 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:58.004 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:58.316 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:58.316 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:58.316 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.316 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:58.316 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:58.316 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:58.316 [25/268] Linking target lib/librte_log.so.24.1 00:03:58.316 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:58.587 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:58.587 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:58.847 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:58.847 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:58.847 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:58.847 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.847 [33/268] Linking target lib/librte_kvargs.so.24.1 00:03:58.847 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:59.106 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:59.106 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:59.106 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:59.106 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:59.106 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:59.366 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:59.366 [41/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:59.366 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:59.366 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:59.366 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:59.366 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:59.366 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:59.625 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:59.625 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:59.625 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:59.884 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:59.884 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:59.884 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:59.884 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:00.143 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:00.143 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:00.143 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:00.143 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:00.143 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:00.402 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:00.402 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:00.402 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:00.402 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:00.661 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:00.661 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:00.661 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:00.661 [66/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:00.661 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:00.921 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:00.921 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:01.180 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:01.180 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:01.180 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:01.180 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:01.180 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:01.180 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:01.180 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:01.180 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:01.440 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:01.440 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:01.440 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:01.700 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:01.700 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:01.700 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:01.700 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:01.700 [85/268] Linking static target lib/librte_ring.a 00:04:01.700 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:01.958 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:01.958 [88/268] Linking static target lib/librte_eal.a 00:04:01.958 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:01.959 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:02.217 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:02.217 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:02.217 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:02.217 [94/268] Linking static target lib/librte_mempool.a 00:04:02.217 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:02.217 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.475 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:02.475 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:02.475 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:02.475 [100/268] Linking static target lib/librte_rcu.a 00:04:02.734 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:02.734 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:02.734 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:02.734 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:02.734 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:02.734 [106/268] Linking static target lib/librte_mbuf.a 00:04:02.997 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:02.997 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:02.997 [109/268] 
Linking static target lib/librte_meter.a 00:04:02.997 [110/268] Linking static target lib/librte_net.a 00:04:03.256 [111/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.256 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.256 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:03.256 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:03.256 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:03.515 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.515 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.515 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:03.774 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:04.033 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:04.033 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.033 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:04.292 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:04.552 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:04.552 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:04.552 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:04.552 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:04.552 [128/268] Linking static target lib/librte_pci.a 00:04:04.552 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:04.552 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:04.552 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:04.552 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:04.812 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:04.812 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:04.812 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:04.812 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:04.812 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:05.072 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:05.072 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.072 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:05.072 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:05.072 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:05.072 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:05.072 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:05.072 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:05.072 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:05.072 [147/268] Linking static target lib/librte_cmdline.a 00:04:05.411 [148/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:05.669 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:05.669 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:05.928 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:05.928 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:05.928 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:05.928 [154/268] Linking static target lib/librte_timer.a 00:04:05.928 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:06.186 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:06.445 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:06.445 [158/268] Linking static target lib/librte_ethdev.a 00:04:06.445 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:06.705 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:06.705 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:06.705 [162/268] Linking static target lib/librte_compressdev.a 00:04:06.705 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.705 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:06.705 [165/268] Linking static target lib/librte_hash.a 00:04:06.964 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:06.964 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:06.964 [168/268] Linking static target lib/librte_dmadev.a 00:04:06.964 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:07.223 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:07.223 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:07.223 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.223 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:07.484 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:07.744 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:07.744 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.744 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:08.003 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:08.003 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.262 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.262 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:08.262 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:08.262 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:08.521 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:08.521 [185/268] Linking static target lib/librte_cryptodev.a 00:04:08.521 [186/268] Linking static target lib/librte_power.a 00:04:08.780 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:08.780 [188/268] Compiling C 
object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:08.780 [189/268] Linking static target lib/librte_security.a 00:04:09.040 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:09.040 [191/268] Linking static target lib/librte_reorder.a 00:04:09.040 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:09.040 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:09.607 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:09.607 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.607 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.865 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.123 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:10.123 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:10.382 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:10.382 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:10.641 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:10.641 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:10.641 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:10.899 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:11.159 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:11.159 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:11.159 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:11.159 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:11.159 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:11.434 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:11.434 [212/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.434 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:11.434 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:11.434 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:11.434 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:11.434 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:11.434 [218/268] Linking static target drivers/librte_bus_vdev.a 00:04:11.434 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:11.434 [220/268] Linking static target drivers/librte_bus_pci.a 00:04:11.434 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:11.702 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:11.961 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:11.961 [224/268] Linking static target drivers/librte_mempool_ring.a 00:04:11.961 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:11.961 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:04:12.220 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.597 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:15.536 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.536 [230/268] Linking target lib/librte_eal.so.24.1 00:04:15.536 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:15.794 [232/268] Linking target lib/librte_timer.so.24.1 00:04:15.794 [233/268] Linking target lib/librte_pci.so.24.1 00:04:15.794 [234/268] Linking target lib/librte_ring.so.24.1 00:04:15.794 [235/268] Linking target lib/librte_meter.so.24.1 00:04:15.794 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:15.794 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:15.794 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:15.794 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:15.794 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:15.794 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:15.794 [242/268] Linking target lib/librte_mempool.so.24.1 00:04:15.794 [243/268] Linking target lib/librte_rcu.so.24.1 00:04:15.794 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:16.053 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:16.053 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:16.053 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:16.053 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:16.053 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:16.312 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:16.312 [251/268] Linking target lib/librte_compressdev.so.24.1 00:04:16.312 [252/268] Linking target lib/librte_reorder.so.24.1 00:04:16.312 [253/268] Linking target lib/librte_net.so.24.1 00:04:16.312 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:16.312 [255/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.570 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:16.570 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:16.570 [258/268] Linking target lib/librte_security.so.24.1 00:04:16.570 [259/268] Linking target lib/librte_hash.so.24.1 00:04:16.570 [260/268] Linking target lib/librte_cmdline.so.24.1 00:04:16.570 [261/268] Linking target lib/librte_ethdev.so.24.1 00:04:16.570 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:16.828 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:16.828 [264/268] Linking target lib/librte_power.so.24.1 00:04:18.200 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:18.200 [266/268] Linking static target lib/librte_vhost.a 00:04:20.751 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.010 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:21.010 INFO: autodetecting backend as ninja 00:04:21.010 INFO: 
calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:42.987 CC lib/log/log.o 00:04:42.987 CC lib/log/log_deprecated.o 00:04:42.987 CC lib/log/log_flags.o 00:04:42.987 CC lib/ut/ut.o 00:04:42.987 CC lib/ut_mock/mock.o 00:04:42.987 LIB libspdk_ut.a 00:04:42.987 LIB libspdk_log.a 00:04:42.987 LIB libspdk_ut_mock.a 00:04:42.987 SO libspdk_ut.so.2.0 00:04:42.987 SO libspdk_log.so.7.1 00:04:42.987 SO libspdk_ut_mock.so.6.0 00:04:42.987 SYMLINK libspdk_ut.so 00:04:42.987 SYMLINK libspdk_ut_mock.so 00:04:42.987 SYMLINK libspdk_log.so 00:04:42.987 CC lib/util/bit_array.o 00:04:42.987 CC lib/util/base64.o 00:04:42.987 CC lib/util/crc32.o 00:04:42.987 CC lib/util/crc16.o 00:04:42.987 CC lib/util/cpuset.o 00:04:42.987 CC lib/util/crc32c.o 00:04:42.987 CC lib/ioat/ioat.o 00:04:42.987 CXX lib/trace_parser/trace.o 00:04:42.987 CC lib/dma/dma.o 00:04:42.987 CC lib/vfio_user/host/vfio_user_pci.o 00:04:42.987 CC lib/util/crc32_ieee.o 00:04:42.987 CC lib/util/crc64.o 00:04:42.987 CC lib/util/dif.o 00:04:42.987 CC lib/util/fd.o 00:04:42.987 CC lib/util/fd_group.o 00:04:42.987 LIB libspdk_dma.a 00:04:42.987 CC lib/vfio_user/host/vfio_user.o 00:04:42.987 SO libspdk_dma.so.5.0 00:04:42.987 CC lib/util/file.o 00:04:42.987 CC lib/util/hexlify.o 00:04:42.987 LIB libspdk_ioat.a 00:04:42.987 SYMLINK libspdk_dma.so 00:04:42.987 CC lib/util/iov.o 00:04:42.987 SO libspdk_ioat.so.7.0 00:04:42.987 CC lib/util/math.o 00:04:42.987 CC lib/util/net.o 00:04:42.987 SYMLINK libspdk_ioat.so 00:04:42.987 CC lib/util/pipe.o 00:04:42.987 CC lib/util/strerror_tls.o 00:04:42.987 CC lib/util/string.o 00:04:42.987 LIB libspdk_vfio_user.a 00:04:42.987 SO libspdk_vfio_user.so.5.0 00:04:42.987 CC lib/util/uuid.o 00:04:42.987 CC lib/util/xor.o 00:04:42.987 CC lib/util/zipf.o 00:04:42.987 SYMLINK libspdk_vfio_user.so 00:04:42.987 CC lib/util/md5.o 00:04:42.987 LIB libspdk_util.a 00:04:42.987 SO libspdk_util.so.10.1 00:04:42.987 LIB libspdk_trace_parser.a 00:04:42.987 SO libspdk_trace_parser.so.6.0 00:04:42.987 SYMLINK libspdk_util.so 00:04:42.987 SYMLINK libspdk_trace_parser.so 00:04:42.987 CC lib/rdma_utils/rdma_utils.o 00:04:42.987 CC lib/conf/conf.o 00:04:42.987 CC lib/json/json_parse.o 00:04:42.987 CC lib/json/json_util.o 00:04:42.987 CC lib/json/json_write.o 00:04:42.987 CC lib/idxd/idxd.o 00:04:42.987 CC lib/idxd/idxd_kernel.o 00:04:42.987 CC lib/idxd/idxd_user.o 00:04:42.987 CC lib/env_dpdk/env.o 00:04:42.987 CC lib/vmd/vmd.o 00:04:42.987 CC lib/vmd/led.o 00:04:42.987 LIB libspdk_conf.a 00:04:42.987 SO libspdk_conf.so.6.0 00:04:42.987 CC lib/env_dpdk/memory.o 00:04:43.247 CC lib/env_dpdk/pci.o 00:04:43.247 CC lib/env_dpdk/init.o 00:04:43.247 SYMLINK libspdk_conf.so 00:04:43.247 CC lib/env_dpdk/threads.o 00:04:43.247 LIB libspdk_rdma_utils.a 00:04:43.247 CC lib/env_dpdk/pci_ioat.o 00:04:43.247 SO libspdk_rdma_utils.so.1.0 00:04:43.247 LIB libspdk_json.a 00:04:43.247 SO libspdk_json.so.6.0 00:04:43.247 CC lib/env_dpdk/pci_virtio.o 00:04:43.247 SYMLINK libspdk_rdma_utils.so 00:04:43.247 SYMLINK libspdk_json.so 00:04:43.247 CC lib/env_dpdk/pci_vmd.o 00:04:43.247 CC lib/env_dpdk/pci_idxd.o 00:04:43.247 CC lib/env_dpdk/pci_event.o 00:04:43.513 CC lib/env_dpdk/sigbus_handler.o 00:04:43.513 CC lib/env_dpdk/pci_dpdk.o 00:04:43.513 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:43.513 LIB libspdk_idxd.a 00:04:43.774 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:43.774 LIB libspdk_vmd.a 00:04:43.774 SO libspdk_idxd.so.12.1 00:04:43.774 CC lib/rdma_provider/common.o 00:04:43.774 SO libspdk_vmd.so.6.0 
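Editor's note: at this point the log has switched from the DPDK ninja build (the [1/268]..[268/268] records above) to SPDK's own quiet-make output, where CC/CXX lines are compiles, LIB lines are static archives, and SO/SYMLINK lines are versioned shared objects and their links. A minimal sketch of reproducing this phase by hand, using the workspace paths shown in the log; the configure flags are assumptions inferred from the output (b_sanitize=address in the DPDK options suggests --enable-asan, and the bdev_xnvme module built later suggests --with-xnvme), not the CI's exact invocation:

    # Sketch only: approximate local reproduction of this build phase.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-xnvme --enable-asan   # flags inferred from the log, see note above
    make -j10   # builds the dpdk/ submodule via meson/ninja, then emits the CC/LIB/SO/SYMLINK lines seen here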
00:04:43.774 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:43.774 CC lib/jsonrpc/jsonrpc_server.o 00:04:43.774 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:43.774 SYMLINK libspdk_idxd.so 00:04:43.774 CC lib/jsonrpc/jsonrpc_client.o 00:04:43.774 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:43.774 SYMLINK libspdk_vmd.so 00:04:44.032 LIB libspdk_rdma_provider.a 00:04:44.032 SO libspdk_rdma_provider.so.7.0 00:04:44.032 LIB libspdk_jsonrpc.a 00:04:44.032 SYMLINK libspdk_rdma_provider.so 00:04:44.032 SO libspdk_jsonrpc.so.6.0 00:04:44.291 SYMLINK libspdk_jsonrpc.so 00:04:44.857 CC lib/rpc/rpc.o 00:04:44.857 LIB libspdk_env_dpdk.a 00:04:44.857 SO libspdk_env_dpdk.so.15.1 00:04:44.857 LIB libspdk_rpc.a 00:04:44.857 SO libspdk_rpc.so.6.0 00:04:45.116 SYMLINK libspdk_env_dpdk.so 00:04:45.116 SYMLINK libspdk_rpc.so 00:04:45.404 CC lib/trace/trace.o 00:04:45.404 CC lib/notify/notify.o 00:04:45.404 CC lib/trace/trace_flags.o 00:04:45.404 CC lib/notify/notify_rpc.o 00:04:45.404 CC lib/trace/trace_rpc.o 00:04:45.404 CC lib/keyring/keyring.o 00:04:45.404 CC lib/keyring/keyring_rpc.o 00:04:45.687 LIB libspdk_notify.a 00:04:45.687 SO libspdk_notify.so.6.0 00:04:45.687 LIB libspdk_keyring.a 00:04:45.687 SYMLINK libspdk_notify.so 00:04:45.687 LIB libspdk_trace.a 00:04:45.946 SO libspdk_keyring.so.2.0 00:04:45.946 SO libspdk_trace.so.11.0 00:04:45.946 SYMLINK libspdk_keyring.so 00:04:45.946 SYMLINK libspdk_trace.so 00:04:46.513 CC lib/thread/thread.o 00:04:46.513 CC lib/thread/iobuf.o 00:04:46.513 CC lib/sock/sock.o 00:04:46.513 CC lib/sock/sock_rpc.o 00:04:47.080 LIB libspdk_sock.a 00:04:47.080 SO libspdk_sock.so.10.0 00:04:47.080 SYMLINK libspdk_sock.so 00:04:47.647 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:47.647 CC lib/nvme/nvme_ctrlr.o 00:04:47.647 CC lib/nvme/nvme_fabric.o 00:04:47.647 CC lib/nvme/nvme_ns_cmd.o 00:04:47.647 CC lib/nvme/nvme_ns.o 00:04:47.647 CC lib/nvme/nvme_pcie_common.o 00:04:47.647 CC lib/nvme/nvme_pcie.o 00:04:47.647 CC lib/nvme/nvme_qpair.o 00:04:47.647 CC lib/nvme/nvme.o 00:04:48.216 LIB libspdk_thread.a 00:04:48.216 CC lib/nvme/nvme_quirks.o 00:04:48.509 SO libspdk_thread.so.11.0 00:04:48.509 CC lib/nvme/nvme_transport.o 00:04:48.509 SYMLINK libspdk_thread.so 00:04:48.509 CC lib/nvme/nvme_discovery.o 00:04:48.509 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:48.509 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:48.509 CC lib/nvme/nvme_tcp.o 00:04:48.776 CC lib/nvme/nvme_opal.o 00:04:48.776 CC lib/nvme/nvme_io_msg.o 00:04:48.776 CC lib/nvme/nvme_poll_group.o 00:04:49.034 CC lib/nvme/nvme_zns.o 00:04:49.034 CC lib/nvme/nvme_stubs.o 00:04:49.293 CC lib/accel/accel.o 00:04:49.293 CC lib/accel/accel_rpc.o 00:04:49.293 CC lib/blob/blobstore.o 00:04:49.293 CC lib/nvme/nvme_auth.o 00:04:49.552 CC lib/nvme/nvme_cuse.o 00:04:49.552 CC lib/blob/request.o 00:04:49.552 CC lib/blob/zeroes.o 00:04:49.552 CC lib/blob/blob_bs_dev.o 00:04:50.118 CC lib/accel/accel_sw.o 00:04:50.118 CC lib/init/json_config.o 00:04:50.118 CC lib/virtio/virtio.o 00:04:50.118 CC lib/fsdev/fsdev.o 00:04:50.376 CC lib/init/subsystem.o 00:04:50.376 CC lib/fsdev/fsdev_io.o 00:04:50.376 CC lib/fsdev/fsdev_rpc.o 00:04:50.376 CC lib/virtio/virtio_vhost_user.o 00:04:50.376 CC lib/virtio/virtio_vfio_user.o 00:04:50.633 CC lib/virtio/virtio_pci.o 00:04:50.633 CC lib/nvme/nvme_rdma.o 00:04:50.633 CC lib/init/subsystem_rpc.o 00:04:50.634 LIB libspdk_accel.a 00:04:50.634 SO libspdk_accel.so.16.0 00:04:50.634 CC lib/init/rpc.o 00:04:50.634 SYMLINK libspdk_accel.so 00:04:50.891 LIB libspdk_fsdev.a 00:04:50.891 SO libspdk_fsdev.so.2.0 00:04:50.891 LIB 
libspdk_virtio.a 00:04:50.891 SYMLINK libspdk_fsdev.so 00:04:50.891 LIB libspdk_init.a 00:04:50.891 SO libspdk_virtio.so.7.0 00:04:50.891 SO libspdk_init.so.6.0 00:04:51.149 CC lib/bdev/bdev.o 00:04:51.149 CC lib/bdev/bdev_rpc.o 00:04:51.149 CC lib/bdev/bdev_zone.o 00:04:51.149 CC lib/bdev/scsi_nvme.o 00:04:51.149 CC lib/bdev/part.o 00:04:51.149 SYMLINK libspdk_virtio.so 00:04:51.149 SYMLINK libspdk_init.so 00:04:51.149 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:51.408 CC lib/event/app.o 00:04:51.408 CC lib/event/reactor.o 00:04:51.408 CC lib/event/log_rpc.o 00:04:51.408 CC lib/event/app_rpc.o 00:04:51.408 CC lib/event/scheduler_static.o 00:04:52.019 LIB libspdk_fuse_dispatcher.a 00:04:52.019 SO libspdk_fuse_dispatcher.so.1.0 00:04:52.019 LIB libspdk_event.a 00:04:52.019 SYMLINK libspdk_fuse_dispatcher.so 00:04:52.019 SO libspdk_event.so.14.0 00:04:52.019 SYMLINK libspdk_event.so 00:04:52.277 LIB libspdk_nvme.a 00:04:52.535 SO libspdk_nvme.so.15.0 00:04:52.794 SYMLINK libspdk_nvme.so 00:04:53.729 LIB libspdk_blob.a 00:04:53.729 SO libspdk_blob.so.12.0 00:04:53.987 SYMLINK libspdk_blob.so 00:04:54.247 CC lib/lvol/lvol.o 00:04:54.247 CC lib/blobfs/blobfs.o 00:04:54.247 CC lib/blobfs/tree.o 00:04:54.247 LIB libspdk_bdev.a 00:04:54.506 SO libspdk_bdev.so.17.0 00:04:54.506 SYMLINK libspdk_bdev.so 00:04:54.766 CC lib/nbd/nbd_rpc.o 00:04:54.766 CC lib/nbd/nbd.o 00:04:54.766 CC lib/ublk/ublk_rpc.o 00:04:54.766 CC lib/ublk/ublk.o 00:04:54.766 CC lib/ftl/ftl_core.o 00:04:54.766 CC lib/scsi/dev.o 00:04:54.766 CC lib/scsi/lun.o 00:04:55.036 CC lib/nvmf/ctrlr.o 00:04:55.036 CC lib/nvmf/ctrlr_discovery.o 00:04:55.036 CC lib/nvmf/ctrlr_bdev.o 00:04:55.036 CC lib/scsi/port.o 00:04:55.312 CC lib/ftl/ftl_init.o 00:04:55.312 LIB libspdk_blobfs.a 00:04:55.312 CC lib/scsi/scsi.o 00:04:55.312 SO libspdk_blobfs.so.11.0 00:04:55.312 LIB libspdk_nbd.a 00:04:55.312 CC lib/ftl/ftl_layout.o 00:04:55.312 SO libspdk_nbd.so.7.0 00:04:55.312 LIB libspdk_lvol.a 00:04:55.571 SO libspdk_lvol.so.11.0 00:04:55.571 SYMLINK libspdk_blobfs.so 00:04:55.571 SYMLINK libspdk_nbd.so 00:04:55.571 CC lib/scsi/scsi_bdev.o 00:04:55.571 CC lib/nvmf/subsystem.o 00:04:55.571 CC lib/scsi/scsi_pr.o 00:04:55.571 CC lib/nvmf/nvmf.o 00:04:55.571 SYMLINK libspdk_lvol.so 00:04:55.571 CC lib/nvmf/nvmf_rpc.o 00:04:55.571 CC lib/nvmf/transport.o 00:04:55.830 LIB libspdk_ublk.a 00:04:55.830 CC lib/ftl/ftl_debug.o 00:04:55.830 SO libspdk_ublk.so.3.0 00:04:55.830 SYMLINK libspdk_ublk.so 00:04:55.830 CC lib/ftl/ftl_io.o 00:04:55.830 CC lib/scsi/scsi_rpc.o 00:04:55.830 CC lib/nvmf/tcp.o 00:04:56.089 CC lib/nvmf/stubs.o 00:04:56.089 CC lib/scsi/task.o 00:04:56.089 CC lib/ftl/ftl_sb.o 00:04:56.089 CC lib/ftl/ftl_l2p.o 00:04:56.348 CC lib/ftl/ftl_l2p_flat.o 00:04:56.348 LIB libspdk_scsi.a 00:04:56.348 CC lib/ftl/ftl_nv_cache.o 00:04:56.348 SO libspdk_scsi.so.9.0 00:04:56.348 CC lib/ftl/ftl_band.o 00:04:56.348 SYMLINK libspdk_scsi.so 00:04:56.607 CC lib/ftl/ftl_band_ops.o 00:04:56.607 CC lib/nvmf/mdns_server.o 00:04:56.607 CC lib/nvmf/rdma.o 00:04:56.607 CC lib/vhost/vhost.o 00:04:56.607 CC lib/iscsi/conn.o 00:04:56.866 CC lib/vhost/vhost_rpc.o 00:04:56.866 CC lib/vhost/vhost_scsi.o 00:04:57.125 CC lib/nvmf/auth.o 00:04:57.125 CC lib/iscsi/init_grp.o 00:04:57.125 CC lib/iscsi/iscsi.o 00:04:57.384 CC lib/iscsi/param.o 00:04:57.384 CC lib/iscsi/portal_grp.o 00:04:57.643 CC lib/ftl/ftl_writer.o 00:04:57.643 CC lib/vhost/vhost_blk.o 00:04:57.643 CC lib/iscsi/tgt_node.o 00:04:57.643 CC lib/iscsi/iscsi_subsystem.o 00:04:57.901 CC lib/iscsi/iscsi_rpc.o 
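Editor's note: the SO/SYMLINK pairs in this stretch (e.g. "SO libspdk_nvme.so.15.0" followed by "SYMLINK libspdk_nvme.so") are the versioned shared objects and the unversioned development symlinks pointing at them. Assuming the conventional build/lib output directory for this workspace, the resulting on-disk layout can be checked with:

    # Sketch: confirm the versioned .so plus symlink produced by the SO/SYMLINK steps.
    ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_nvme.so*
    # expected shape (version taken from the "SO libspdk_nvme.so.15.0" record above):
    #   libspdk_nvme.so -> libspdk_nvme.so.15.0
    #   libspdk_nvme.so.15.0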
00:04:57.901 CC lib/iscsi/task.o 00:04:57.901 CC lib/ftl/ftl_rq.o 00:04:57.901 CC lib/vhost/rte_vhost_user.o 00:04:58.159 CC lib/ftl/ftl_reloc.o 00:04:58.159 CC lib/ftl/ftl_l2p_cache.o 00:04:58.159 CC lib/ftl/ftl_p2l.o 00:04:58.159 CC lib/ftl/ftl_p2l_log.o 00:04:58.159 CC lib/ftl/mngt/ftl_mngt.o 00:04:58.418 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:58.418 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:58.418 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:58.677 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:58.677 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:58.677 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:58.677 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:58.677 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:58.677 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:58.677 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:58.677 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:58.677 LIB libspdk_iscsi.a 00:04:58.970 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:58.970 CC lib/ftl/utils/ftl_conf.o 00:04:58.970 SO libspdk_iscsi.so.8.0 00:04:58.970 CC lib/ftl/utils/ftl_md.o 00:04:58.970 CC lib/ftl/utils/ftl_mempool.o 00:04:58.970 CC lib/ftl/utils/ftl_bitmap.o 00:04:58.970 CC lib/ftl/utils/ftl_property.o 00:04:58.970 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:58.970 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:58.970 LIB libspdk_vhost.a 00:04:58.970 SYMLINK libspdk_iscsi.so 00:04:58.970 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:59.251 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:59.251 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:59.251 LIB libspdk_nvmf.a 00:04:59.251 SO libspdk_vhost.so.8.0 00:04:59.251 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:59.251 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:59.251 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:59.251 SYMLINK libspdk_vhost.so 00:04:59.251 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:59.251 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:59.251 SO libspdk_nvmf.so.20.0 00:04:59.251 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:59.251 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:59.251 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:59.510 CC lib/ftl/base/ftl_base_dev.o 00:04:59.510 CC lib/ftl/base/ftl_base_bdev.o 00:04:59.510 CC lib/ftl/ftl_trace.o 00:04:59.510 SYMLINK libspdk_nvmf.so 00:04:59.768 LIB libspdk_ftl.a 00:05:00.027 SO libspdk_ftl.so.9.0 00:05:00.286 SYMLINK libspdk_ftl.so 00:05:00.854 CC module/env_dpdk/env_dpdk_rpc.o 00:05:00.854 CC module/blob/bdev/blob_bdev.o 00:05:00.854 CC module/keyring/file/keyring.o 00:05:00.854 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:00.854 CC module/sock/posix/posix.o 00:05:00.854 CC module/scheduler/gscheduler/gscheduler.o 00:05:00.854 CC module/accel/error/accel_error.o 00:05:00.854 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:00.854 CC module/accel/ioat/accel_ioat.o 00:05:00.854 CC module/fsdev/aio/fsdev_aio.o 00:05:00.854 LIB libspdk_env_dpdk_rpc.a 00:05:00.854 SO libspdk_env_dpdk_rpc.so.6.0 00:05:01.113 CC module/keyring/file/keyring_rpc.o 00:05:01.113 LIB libspdk_scheduler_gscheduler.a 00:05:01.113 LIB libspdk_scheduler_dpdk_governor.a 00:05:01.113 SYMLINK libspdk_env_dpdk_rpc.so 00:05:01.113 CC module/accel/ioat/accel_ioat_rpc.o 00:05:01.113 SO libspdk_scheduler_gscheduler.so.4.0 00:05:01.113 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:01.113 LIB libspdk_scheduler_dynamic.a 00:05:01.113 CC module/accel/error/accel_error_rpc.o 00:05:01.113 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:01.113 SO libspdk_scheduler_dynamic.so.4.0 00:05:01.113 SYMLINK libspdk_scheduler_gscheduler.so 00:05:01.113 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:01.113 CC module/fsdev/aio/linux_aio_mgr.o 
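Editor's note: the *_rpc.o objects in this stretch (env_dpdk_rpc, accel_error_rpc, fsdev_aio_rpc, and so on) compile the JSON-RPC handlers for each pluggable module. A hedged usage sketch, assuming the standard spdk_tgt app from this build and a module built shortly after this point (module/bdev/malloc); the method names are standard SPDK RPCs, while the size/block-size arguments are illustrative only:

    # Sketch: exercising module RPC handlers once a target app is running.
    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt &                                  # standard target app from this build (assumed)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # handled by module/bdev/malloc (illustrative args)
    ./scripts/rpc.py bdev_get_bdevs                         # list the bdevs registered so far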
00:05:01.113 LIB libspdk_keyring_file.a 00:05:01.113 LIB libspdk_blob_bdev.a 00:05:01.113 SYMLINK libspdk_scheduler_dynamic.so 00:05:01.113 SO libspdk_blob_bdev.so.12.0 00:05:01.113 SO libspdk_keyring_file.so.2.0 00:05:01.113 LIB libspdk_accel_ioat.a 00:05:01.113 LIB libspdk_accel_error.a 00:05:01.113 SO libspdk_accel_ioat.so.6.0 00:05:01.372 SYMLINK libspdk_blob_bdev.so 00:05:01.372 SO libspdk_accel_error.so.2.0 00:05:01.372 SYMLINK libspdk_keyring_file.so 00:05:01.372 SYMLINK libspdk_accel_ioat.so 00:05:01.372 CC module/keyring/linux/keyring.o 00:05:01.372 CC module/keyring/linux/keyring_rpc.o 00:05:01.372 SYMLINK libspdk_accel_error.so 00:05:01.372 CC module/accel/dsa/accel_dsa.o 00:05:01.372 CC module/accel/dsa/accel_dsa_rpc.o 00:05:01.372 LIB libspdk_keyring_linux.a 00:05:01.372 CC module/accel/iaa/accel_iaa.o 00:05:01.372 SO libspdk_keyring_linux.so.1.0 00:05:01.631 CC module/accel/iaa/accel_iaa_rpc.o 00:05:01.631 CC module/bdev/error/vbdev_error.o 00:05:01.631 CC module/bdev/delay/vbdev_delay.o 00:05:01.631 SYMLINK libspdk_keyring_linux.so 00:05:01.631 CC module/blobfs/bdev/blobfs_bdev.o 00:05:01.631 CC module/bdev/error/vbdev_error_rpc.o 00:05:01.631 CC module/bdev/gpt/gpt.o 00:05:01.631 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:01.631 LIB libspdk_fsdev_aio.a 00:05:01.631 LIB libspdk_accel_dsa.a 00:05:01.631 LIB libspdk_accel_iaa.a 00:05:01.631 LIB libspdk_sock_posix.a 00:05:01.631 SO libspdk_fsdev_aio.so.1.0 00:05:01.631 SO libspdk_accel_iaa.so.3.0 00:05:01.631 SO libspdk_accel_dsa.so.5.0 00:05:01.631 SO libspdk_sock_posix.so.6.0 00:05:01.631 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:01.890 CC module/bdev/gpt/vbdev_gpt.o 00:05:01.890 SYMLINK libspdk_accel_iaa.so 00:05:01.890 SYMLINK libspdk_accel_dsa.so 00:05:01.890 SYMLINK libspdk_fsdev_aio.so 00:05:01.890 SYMLINK libspdk_sock_posix.so 00:05:01.890 LIB libspdk_bdev_error.a 00:05:01.890 SO libspdk_bdev_error.so.6.0 00:05:01.890 SYMLINK libspdk_bdev_error.so 00:05:01.890 LIB libspdk_bdev_delay.a 00:05:01.890 LIB libspdk_blobfs_bdev.a 00:05:01.890 CC module/bdev/malloc/bdev_malloc.o 00:05:01.890 SO libspdk_bdev_delay.so.6.0 00:05:01.890 CC module/bdev/lvol/vbdev_lvol.o 00:05:01.890 CC module/bdev/null/bdev_null.o 00:05:01.890 SO libspdk_blobfs_bdev.so.6.0 00:05:01.890 CC module/bdev/nvme/bdev_nvme.o 00:05:01.890 CC module/bdev/passthru/vbdev_passthru.o 00:05:02.149 CC module/bdev/raid/bdev_raid.o 00:05:02.149 SYMLINK libspdk_blobfs_bdev.so 00:05:02.149 SYMLINK libspdk_bdev_delay.so 00:05:02.149 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:02.149 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:02.149 LIB libspdk_bdev_gpt.a 00:05:02.149 SO libspdk_bdev_gpt.so.6.0 00:05:02.149 CC module/bdev/split/vbdev_split.o 00:05:02.149 SYMLINK libspdk_bdev_gpt.so 00:05:02.149 CC module/bdev/nvme/nvme_rpc.o 00:05:02.149 CC module/bdev/nvme/bdev_mdns_client.o 00:05:02.407 CC module/bdev/null/bdev_null_rpc.o 00:05:02.407 LIB libspdk_bdev_passthru.a 00:05:02.407 SO libspdk_bdev_passthru.so.6.0 00:05:02.407 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:02.407 CC module/bdev/split/vbdev_split_rpc.o 00:05:02.407 CC module/bdev/raid/bdev_raid_rpc.o 00:05:02.407 SYMLINK libspdk_bdev_passthru.so 00:05:02.407 CC module/bdev/raid/bdev_raid_sb.o 00:05:02.407 CC module/bdev/raid/raid0.o 00:05:02.407 LIB libspdk_bdev_null.a 00:05:02.666 SO libspdk_bdev_null.so.6.0 00:05:02.666 LIB libspdk_bdev_malloc.a 00:05:02.666 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:02.666 LIB libspdk_bdev_split.a 00:05:02.666 SO libspdk_bdev_malloc.so.6.0 00:05:02.666 SYMLINK 
libspdk_bdev_null.so 00:05:02.666 SO libspdk_bdev_split.so.6.0 00:05:02.666 CC module/bdev/raid/raid1.o 00:05:02.666 SYMLINK libspdk_bdev_malloc.so 00:05:02.666 SYMLINK libspdk_bdev_split.so 00:05:02.666 CC module/bdev/raid/concat.o 00:05:02.925 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:02.925 CC module/bdev/xnvme/bdev_xnvme.o 00:05:02.925 CC module/bdev/aio/bdev_aio.o 00:05:02.925 LIB libspdk_bdev_lvol.a 00:05:02.925 CC module/bdev/ftl/bdev_ftl.o 00:05:02.925 CC module/bdev/iscsi/bdev_iscsi.o 00:05:02.925 CC module/bdev/aio/bdev_aio_rpc.o 00:05:02.925 SO libspdk_bdev_lvol.so.6.0 00:05:03.184 CC module/bdev/nvme/vbdev_opal.o 00:05:03.184 SYMLINK libspdk_bdev_lvol.so 00:05:03.184 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:03.184 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:03.184 LIB libspdk_bdev_raid.a 00:05:03.184 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:03.184 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:03.184 SO libspdk_bdev_raid.so.6.0 00:05:03.184 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:03.444 LIB libspdk_bdev_aio.a 00:05:03.444 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:03.444 SO libspdk_bdev_aio.so.6.0 00:05:03.444 SYMLINK libspdk_bdev_raid.so 00:05:03.444 LIB libspdk_bdev_xnvme.a 00:05:03.444 LIB libspdk_bdev_zone_block.a 00:05:03.444 SO libspdk_bdev_xnvme.so.3.0 00:05:03.444 SYMLINK libspdk_bdev_aio.so 00:05:03.444 SO libspdk_bdev_zone_block.so.6.0 00:05:03.444 SYMLINK libspdk_bdev_xnvme.so 00:05:03.444 LIB libspdk_bdev_ftl.a 00:05:03.444 SYMLINK libspdk_bdev_zone_block.so 00:05:03.444 LIB libspdk_bdev_iscsi.a 00:05:03.444 SO libspdk_bdev_ftl.so.6.0 00:05:03.834 SO libspdk_bdev_iscsi.so.6.0 00:05:03.834 SYMLINK libspdk_bdev_ftl.so 00:05:03.834 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:03.834 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:03.834 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:03.834 SYMLINK libspdk_bdev_iscsi.so 00:05:04.106 LIB libspdk_bdev_virtio.a 00:05:04.365 SO libspdk_bdev_virtio.so.6.0 00:05:04.365 SYMLINK libspdk_bdev_virtio.so 00:05:04.932 LIB libspdk_bdev_nvme.a 00:05:04.932 SO libspdk_bdev_nvme.so.7.1 00:05:05.190 SYMLINK libspdk_bdev_nvme.so 00:05:05.757 CC module/event/subsystems/keyring/keyring.o 00:05:05.757 CC module/event/subsystems/iobuf/iobuf.o 00:05:05.757 CC module/event/subsystems/scheduler/scheduler.o 00:05:05.757 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:05.757 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:05.757 CC module/event/subsystems/sock/sock.o 00:05:05.757 CC module/event/subsystems/vmd/vmd.o 00:05:05.757 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:05.757 CC module/event/subsystems/fsdev/fsdev.o 00:05:06.016 LIB libspdk_event_sock.a 00:05:06.016 LIB libspdk_event_vmd.a 00:05:06.016 LIB libspdk_event_iobuf.a 00:05:06.016 LIB libspdk_event_vhost_blk.a 00:05:06.016 LIB libspdk_event_fsdev.a 00:05:06.016 LIB libspdk_event_keyring.a 00:05:06.016 SO libspdk_event_sock.so.5.0 00:05:06.016 LIB libspdk_event_scheduler.a 00:05:06.016 SO libspdk_event_fsdev.so.1.0 00:05:06.016 SO libspdk_event_vhost_blk.so.3.0 00:05:06.016 SO libspdk_event_vmd.so.6.0 00:05:06.016 SO libspdk_event_keyring.so.1.0 00:05:06.016 SO libspdk_event_iobuf.so.3.0 00:05:06.016 SO libspdk_event_scheduler.so.4.0 00:05:06.016 SYMLINK libspdk_event_sock.so 00:05:06.016 SYMLINK libspdk_event_fsdev.so 00:05:06.016 SYMLINK libspdk_event_keyring.so 00:05:06.016 SYMLINK libspdk_event_vhost_blk.so 00:05:06.016 SYMLINK libspdk_event_vmd.so 00:05:06.016 SYMLINK libspdk_event_iobuf.so 00:05:06.016 SYMLINK libspdk_event_scheduler.so 
00:05:06.584 CC module/event/subsystems/accel/accel.o 00:05:06.584 LIB libspdk_event_accel.a 00:05:06.584 SO libspdk_event_accel.so.6.0 00:05:06.843 SYMLINK libspdk_event_accel.so 00:05:07.102 CC module/event/subsystems/bdev/bdev.o 00:05:07.360 LIB libspdk_event_bdev.a 00:05:07.360 SO libspdk_event_bdev.so.6.0 00:05:07.619 SYMLINK libspdk_event_bdev.so 00:05:07.876 CC module/event/subsystems/scsi/scsi.o 00:05:07.876 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:07.876 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:07.876 CC module/event/subsystems/nbd/nbd.o 00:05:07.876 CC module/event/subsystems/ublk/ublk.o 00:05:08.135 LIB libspdk_event_nbd.a 00:05:08.135 LIB libspdk_event_scsi.a 00:05:08.135 SO libspdk_event_nbd.so.6.0 00:05:08.135 SO libspdk_event_scsi.so.6.0 00:05:08.135 LIB libspdk_event_ublk.a 00:05:08.135 SO libspdk_event_ublk.so.3.0 00:05:08.135 SYMLINK libspdk_event_scsi.so 00:05:08.135 SYMLINK libspdk_event_nbd.so 00:05:08.135 LIB libspdk_event_nvmf.a 00:05:08.135 SO libspdk_event_nvmf.so.6.0 00:05:08.135 SYMLINK libspdk_event_ublk.so 00:05:08.394 SYMLINK libspdk_event_nvmf.so 00:05:08.653 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:08.653 CC module/event/subsystems/iscsi/iscsi.o 00:05:08.653 LIB libspdk_event_vhost_scsi.a 00:05:08.653 LIB libspdk_event_iscsi.a 00:05:08.653 SO libspdk_event_vhost_scsi.so.3.0 00:05:08.653 SO libspdk_event_iscsi.so.6.0 00:05:08.912 SYMLINK libspdk_event_vhost_scsi.so 00:05:08.912 SYMLINK libspdk_event_iscsi.so 00:05:09.172 SO libspdk.so.6.0 00:05:09.172 SYMLINK libspdk.so 00:05:09.433 CC app/trace_record/trace_record.o 00:05:09.433 CXX app/trace/trace.o 00:05:09.433 TEST_HEADER include/spdk/accel.h 00:05:09.433 TEST_HEADER include/spdk/accel_module.h 00:05:09.433 TEST_HEADER include/spdk/assert.h 00:05:09.433 TEST_HEADER include/spdk/barrier.h 00:05:09.433 TEST_HEADER include/spdk/base64.h 00:05:09.433 TEST_HEADER include/spdk/bdev.h 00:05:09.433 TEST_HEADER include/spdk/bdev_module.h 00:05:09.433 TEST_HEADER include/spdk/bdev_zone.h 00:05:09.433 TEST_HEADER include/spdk/bit_array.h 00:05:09.433 TEST_HEADER include/spdk/bit_pool.h 00:05:09.433 TEST_HEADER include/spdk/blob_bdev.h 00:05:09.433 CC app/nvmf_tgt/nvmf_main.o 00:05:09.433 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:09.433 TEST_HEADER include/spdk/blobfs.h 00:05:09.433 TEST_HEADER include/spdk/blob.h 00:05:09.433 TEST_HEADER include/spdk/conf.h 00:05:09.433 TEST_HEADER include/spdk/config.h 00:05:09.433 TEST_HEADER include/spdk/cpuset.h 00:05:09.433 TEST_HEADER include/spdk/crc16.h 00:05:09.433 TEST_HEADER include/spdk/crc32.h 00:05:09.433 TEST_HEADER include/spdk/crc64.h 00:05:09.433 TEST_HEADER include/spdk/dif.h 00:05:09.433 TEST_HEADER include/spdk/dma.h 00:05:09.433 TEST_HEADER include/spdk/endian.h 00:05:09.433 TEST_HEADER include/spdk/env_dpdk.h 00:05:09.433 TEST_HEADER include/spdk/env.h 00:05:09.433 TEST_HEADER include/spdk/event.h 00:05:09.433 TEST_HEADER include/spdk/fd_group.h 00:05:09.433 TEST_HEADER include/spdk/fd.h 00:05:09.433 TEST_HEADER include/spdk/file.h 00:05:09.433 TEST_HEADER include/spdk/fsdev.h 00:05:09.433 TEST_HEADER include/spdk/fsdev_module.h 00:05:09.433 TEST_HEADER include/spdk/ftl.h 00:05:09.433 TEST_HEADER include/spdk/gpt_spec.h 00:05:09.433 CC examples/util/zipf/zipf.o 00:05:09.433 TEST_HEADER include/spdk/hexlify.h 00:05:09.433 TEST_HEADER include/spdk/histogram_data.h 00:05:09.433 TEST_HEADER include/spdk/idxd.h 00:05:09.433 TEST_HEADER include/spdk/idxd_spec.h 00:05:09.433 TEST_HEADER include/spdk/init.h 00:05:09.433 TEST_HEADER 
include/spdk/ioat.h 00:05:09.433 CC test/thread/poller_perf/poller_perf.o 00:05:09.433 TEST_HEADER include/spdk/ioat_spec.h 00:05:09.433 TEST_HEADER include/spdk/iscsi_spec.h 00:05:09.433 CC examples/ioat/perf/perf.o 00:05:09.433 TEST_HEADER include/spdk/json.h 00:05:09.433 TEST_HEADER include/spdk/jsonrpc.h 00:05:09.433 TEST_HEADER include/spdk/keyring.h 00:05:09.433 TEST_HEADER include/spdk/keyring_module.h 00:05:09.692 TEST_HEADER include/spdk/likely.h 00:05:09.692 TEST_HEADER include/spdk/log.h 00:05:09.692 TEST_HEADER include/spdk/lvol.h 00:05:09.692 TEST_HEADER include/spdk/md5.h 00:05:09.692 TEST_HEADER include/spdk/memory.h 00:05:09.692 TEST_HEADER include/spdk/mmio.h 00:05:09.692 TEST_HEADER include/spdk/nbd.h 00:05:09.692 TEST_HEADER include/spdk/net.h 00:05:09.692 TEST_HEADER include/spdk/notify.h 00:05:09.692 CC test/dma/test_dma/test_dma.o 00:05:09.692 TEST_HEADER include/spdk/nvme.h 00:05:09.692 TEST_HEADER include/spdk/nvme_intel.h 00:05:09.692 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:09.692 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:09.692 TEST_HEADER include/spdk/nvme_spec.h 00:05:09.692 CC test/app/bdev_svc/bdev_svc.o 00:05:09.692 TEST_HEADER include/spdk/nvme_zns.h 00:05:09.692 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:09.692 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:09.692 TEST_HEADER include/spdk/nvmf.h 00:05:09.692 TEST_HEADER include/spdk/nvmf_spec.h 00:05:09.692 TEST_HEADER include/spdk/nvmf_transport.h 00:05:09.692 TEST_HEADER include/spdk/opal.h 00:05:09.692 TEST_HEADER include/spdk/opal_spec.h 00:05:09.692 TEST_HEADER include/spdk/pci_ids.h 00:05:09.692 TEST_HEADER include/spdk/pipe.h 00:05:09.692 TEST_HEADER include/spdk/queue.h 00:05:09.692 TEST_HEADER include/spdk/reduce.h 00:05:09.692 TEST_HEADER include/spdk/rpc.h 00:05:09.692 TEST_HEADER include/spdk/scheduler.h 00:05:09.692 TEST_HEADER include/spdk/scsi.h 00:05:09.692 CC test/env/mem_callbacks/mem_callbacks.o 00:05:09.692 TEST_HEADER include/spdk/scsi_spec.h 00:05:09.692 TEST_HEADER include/spdk/sock.h 00:05:09.692 TEST_HEADER include/spdk/stdinc.h 00:05:09.692 TEST_HEADER include/spdk/string.h 00:05:09.692 TEST_HEADER include/spdk/thread.h 00:05:09.692 TEST_HEADER include/spdk/trace.h 00:05:09.692 TEST_HEADER include/spdk/trace_parser.h 00:05:09.692 TEST_HEADER include/spdk/tree.h 00:05:09.692 TEST_HEADER include/spdk/ublk.h 00:05:09.692 TEST_HEADER include/spdk/util.h 00:05:09.692 TEST_HEADER include/spdk/uuid.h 00:05:09.692 TEST_HEADER include/spdk/version.h 00:05:09.692 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:09.692 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:09.692 TEST_HEADER include/spdk/vhost.h 00:05:09.692 TEST_HEADER include/spdk/vmd.h 00:05:09.692 TEST_HEADER include/spdk/xor.h 00:05:09.692 TEST_HEADER include/spdk/zipf.h 00:05:09.692 CXX test/cpp_headers/accel.o 00:05:09.692 LINK nvmf_tgt 00:05:09.692 LINK poller_perf 00:05:09.692 LINK zipf 00:05:09.692 LINK spdk_trace_record 00:05:09.692 LINK bdev_svc 00:05:09.692 LINK ioat_perf 00:05:09.951 LINK spdk_trace 00:05:09.951 CXX test/cpp_headers/accel_module.o 00:05:09.951 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:09.951 CC app/iscsi_tgt/iscsi_tgt.o 00:05:09.951 CC examples/ioat/verify/verify.o 00:05:09.951 CC app/spdk_tgt/spdk_tgt.o 00:05:10.210 CXX test/cpp_headers/assert.o 00:05:10.210 LINK test_dma 00:05:10.210 CC examples/thread/thread/thread_ex.o 00:05:10.210 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:10.210 LINK mem_callbacks 00:05:10.210 LINK interrupt_tgt 00:05:10.210 CC 
examples/sock/hello_world/hello_sock.o 00:05:10.210 LINK iscsi_tgt 00:05:10.210 CXX test/cpp_headers/barrier.o 00:05:10.210 LINK verify 00:05:10.210 LINK spdk_tgt 00:05:10.469 CXX test/cpp_headers/base64.o 00:05:10.469 LINK thread 00:05:10.469 CC test/env/vtophys/vtophys.o 00:05:10.469 CXX test/cpp_headers/bdev.o 00:05:10.469 CC test/rpc_client/rpc_client_test.o 00:05:10.469 LINK hello_sock 00:05:10.728 CC app/spdk_lspci/spdk_lspci.o 00:05:10.728 LINK nvme_fuzz 00:05:10.728 LINK vtophys 00:05:10.728 CC test/event/event_perf/event_perf.o 00:05:10.728 CC test/blobfs/mkfs/mkfs.o 00:05:10.728 CXX test/cpp_headers/bdev_module.o 00:05:10.728 CXX test/cpp_headers/bdev_zone.o 00:05:10.728 CC test/accel/dif/dif.o 00:05:10.728 LINK rpc_client_test 00:05:10.728 LINK spdk_lspci 00:05:10.986 LINK event_perf 00:05:10.986 CC examples/vmd/lsvmd/lsvmd.o 00:05:10.986 CXX test/cpp_headers/bit_array.o 00:05:10.986 LINK mkfs 00:05:10.986 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:10.986 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:10.986 LINK lsvmd 00:05:10.986 CXX test/cpp_headers/bit_pool.o 00:05:11.245 CC app/spdk_nvme_perf/perf.o 00:05:11.245 CC test/event/reactor/reactor.o 00:05:11.245 CC examples/idxd/perf/perf.o 00:05:11.245 LINK env_dpdk_post_init 00:05:11.245 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:11.245 CC app/spdk_nvme_identify/identify.o 00:05:11.245 CXX test/cpp_headers/blob_bdev.o 00:05:11.245 CC examples/vmd/led/led.o 00:05:11.245 LINK reactor 00:05:11.504 CC test/env/memory/memory_ut.o 00:05:11.504 CXX test/cpp_headers/blobfs_bdev.o 00:05:11.504 LINK led 00:05:11.504 LINK hello_fsdev 00:05:11.504 LINK idxd_perf 00:05:11.504 LINK dif 00:05:11.504 CC test/event/reactor_perf/reactor_perf.o 00:05:11.763 CXX test/cpp_headers/blobfs.o 00:05:11.763 CXX test/cpp_headers/blob.o 00:05:11.763 LINK reactor_perf 00:05:11.763 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:11.763 CC test/env/pci/pci_ut.o 00:05:11.763 CXX test/cpp_headers/conf.o 00:05:12.021 CC examples/accel/perf/accel_perf.o 00:05:12.021 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:12.021 CC test/app/histogram_perf/histogram_perf.o 00:05:12.021 CXX test/cpp_headers/config.o 00:05:12.021 CC test/event/app_repeat/app_repeat.o 00:05:12.021 CXX test/cpp_headers/cpuset.o 00:05:12.021 LINK spdk_nvme_perf 00:05:12.021 LINK histogram_perf 00:05:12.280 LINK app_repeat 00:05:12.280 LINK spdk_nvme_identify 00:05:12.280 CXX test/cpp_headers/crc16.o 00:05:12.280 CXX test/cpp_headers/crc32.o 00:05:12.280 CXX test/cpp_headers/crc64.o 00:05:12.280 LINK pci_ut 00:05:12.539 LINK vhost_fuzz 00:05:12.539 CXX test/cpp_headers/dif.o 00:05:12.539 CC test/app/jsoncat/jsoncat.o 00:05:12.539 LINK accel_perf 00:05:12.539 CC app/spdk_nvme_discover/discovery_aer.o 00:05:12.539 CC test/event/scheduler/scheduler.o 00:05:12.539 CC app/spdk_top/spdk_top.o 00:05:12.539 CXX test/cpp_headers/dma.o 00:05:12.539 LINK jsoncat 00:05:12.808 LINK memory_ut 00:05:12.808 LINK spdk_nvme_discover 00:05:12.808 CC app/vhost/vhost.o 00:05:12.808 CXX test/cpp_headers/endian.o 00:05:12.808 LINK scheduler 00:05:12.808 CC app/spdk_dd/spdk_dd.o 00:05:12.808 LINK iscsi_fuzz 00:05:13.075 CXX test/cpp_headers/env_dpdk.o 00:05:13.075 LINK vhost 00:05:13.075 CC examples/blob/hello_world/hello_blob.o 00:05:13.075 CC test/app/stub/stub.o 00:05:13.075 CC examples/nvme/hello_world/hello_world.o 00:05:13.075 CC examples/nvme/reconnect/reconnect.o 00:05:13.075 CXX test/cpp_headers/env.o 00:05:13.075 CC app/fio/nvme/fio_plugin.o 00:05:13.334 LINK stub 00:05:13.334 LINK 
hello_blob 00:05:13.334 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:13.334 LINK spdk_dd 00:05:13.334 CC examples/nvme/arbitration/arbitration.o 00:05:13.334 CXX test/cpp_headers/event.o 00:05:13.334 LINK hello_world 00:05:13.593 CXX test/cpp_headers/fd_group.o 00:05:13.593 LINK reconnect 00:05:13.593 CC examples/blob/cli/blobcli.o 00:05:13.593 LINK spdk_top 00:05:13.593 CXX test/cpp_headers/fd.o 00:05:13.593 LINK arbitration 00:05:13.593 CC test/lvol/esnap/esnap.o 00:05:13.593 CXX test/cpp_headers/file.o 00:05:13.593 CC test/nvme/aer/aer.o 00:05:13.852 CC test/bdev/bdevio/bdevio.o 00:05:13.852 LINK spdk_nvme 00:05:13.852 CC test/nvme/reset/reset.o 00:05:13.852 LINK nvme_manage 00:05:13.852 CXX test/cpp_headers/fsdev.o 00:05:13.852 CC examples/nvme/hotplug/hotplug.o 00:05:13.852 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:14.111 CC app/fio/bdev/fio_plugin.o 00:05:14.111 CXX test/cpp_headers/fsdev_module.o 00:05:14.111 LINK aer 00:05:14.111 LINK blobcli 00:05:14.111 CC examples/nvme/abort/abort.o 00:05:14.111 LINK cmb_copy 00:05:14.111 LINK reset 00:05:14.111 LINK bdevio 00:05:14.111 LINK hotplug 00:05:14.391 CXX test/cpp_headers/ftl.o 00:05:14.391 CC test/nvme/sgl/sgl.o 00:05:14.391 CXX test/cpp_headers/gpt_spec.o 00:05:14.391 CC test/nvme/e2edp/nvme_dp.o 00:05:14.391 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:14.391 CC test/nvme/overhead/overhead.o 00:05:14.391 CXX test/cpp_headers/hexlify.o 00:05:14.391 CC test/nvme/err_injection/err_injection.o 00:05:14.650 LINK abort 00:05:14.650 LINK spdk_bdev 00:05:14.650 LINK sgl 00:05:14.650 LINK pmr_persistence 00:05:14.650 CXX test/cpp_headers/histogram_data.o 00:05:14.650 LINK err_injection 00:05:14.650 LINK nvme_dp 00:05:14.650 CXX test/cpp_headers/idxd.o 00:05:14.908 LINK overhead 00:05:14.908 CC examples/bdev/bdevperf/bdevperf.o 00:05:14.908 CC examples/bdev/hello_world/hello_bdev.o 00:05:14.908 CXX test/cpp_headers/idxd_spec.o 00:05:14.908 CC test/nvme/startup/startup.o 00:05:14.908 CC test/nvme/simple_copy/simple_copy.o 00:05:14.908 CC test/nvme/reserve/reserve.o 00:05:14.908 CC test/nvme/connect_stress/connect_stress.o 00:05:15.167 CC test/nvme/boot_partition/boot_partition.o 00:05:15.167 CC test/nvme/compliance/nvme_compliance.o 00:05:15.167 LINK hello_bdev 00:05:15.167 CXX test/cpp_headers/init.o 00:05:15.167 LINK startup 00:05:15.167 LINK reserve 00:05:15.167 LINK simple_copy 00:05:15.167 LINK connect_stress 00:05:15.167 LINK boot_partition 00:05:15.426 CXX test/cpp_headers/ioat.o 00:05:15.426 CXX test/cpp_headers/ioat_spec.o 00:05:15.426 CC test/nvme/fused_ordering/fused_ordering.o 00:05:15.426 CXX test/cpp_headers/iscsi_spec.o 00:05:15.426 LINK nvme_compliance 00:05:15.426 CXX test/cpp_headers/json.o 00:05:15.426 CXX test/cpp_headers/jsonrpc.o 00:05:15.426 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:15.426 CC test/nvme/fdp/fdp.o 00:05:15.684 CC test/nvme/cuse/cuse.o 00:05:15.684 CXX test/cpp_headers/keyring.o 00:05:15.684 LINK fused_ordering 00:05:15.684 CXX test/cpp_headers/keyring_module.o 00:05:15.684 CXX test/cpp_headers/likely.o 00:05:15.684 CXX test/cpp_headers/log.o 00:05:15.684 LINK doorbell_aers 00:05:15.684 CXX test/cpp_headers/lvol.o 00:05:15.684 LINK bdevperf 00:05:15.943 CXX test/cpp_headers/md5.o 00:05:15.943 CXX test/cpp_headers/memory.o 00:05:15.943 CXX test/cpp_headers/mmio.o 00:05:15.943 CXX test/cpp_headers/nbd.o 00:05:15.943 CXX test/cpp_headers/net.o 00:05:15.943 CXX test/cpp_headers/notify.o 00:05:15.943 LINK fdp 00:05:15.943 CXX test/cpp_headers/nvme.o 00:05:15.943 CXX 
test/cpp_headers/nvme_intel.o 00:05:15.943 CXX test/cpp_headers/nvme_ocssd.o 00:05:15.943 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:15.943 CXX test/cpp_headers/nvme_spec.o 00:05:16.202 CXX test/cpp_headers/nvme_zns.o 00:05:16.202 CXX test/cpp_headers/nvmf_cmd.o 00:05:16.202 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:16.202 CXX test/cpp_headers/nvmf.o 00:05:16.202 CXX test/cpp_headers/nvmf_spec.o 00:05:16.202 CXX test/cpp_headers/nvmf_transport.o 00:05:16.202 CXX test/cpp_headers/opal.o 00:05:16.202 CC examples/nvmf/nvmf/nvmf.o 00:05:16.202 CXX test/cpp_headers/opal_spec.o 00:05:16.202 CXX test/cpp_headers/pci_ids.o 00:05:16.460 CXX test/cpp_headers/pipe.o 00:05:16.460 CXX test/cpp_headers/queue.o 00:05:16.460 CXX test/cpp_headers/reduce.o 00:05:16.460 CXX test/cpp_headers/rpc.o 00:05:16.460 CXX test/cpp_headers/scheduler.o 00:05:16.460 CXX test/cpp_headers/scsi.o 00:05:16.460 CXX test/cpp_headers/scsi_spec.o 00:05:16.460 CXX test/cpp_headers/sock.o 00:05:16.460 CXX test/cpp_headers/stdinc.o 00:05:16.460 LINK nvmf 00:05:16.460 CXX test/cpp_headers/string.o 00:05:16.460 CXX test/cpp_headers/thread.o 00:05:16.722 CXX test/cpp_headers/trace.o 00:05:16.722 CXX test/cpp_headers/trace_parser.o 00:05:16.722 CXX test/cpp_headers/tree.o 00:05:16.722 CXX test/cpp_headers/ublk.o 00:05:16.722 CXX test/cpp_headers/util.o 00:05:16.722 CXX test/cpp_headers/uuid.o 00:05:16.722 CXX test/cpp_headers/version.o 00:05:16.722 CXX test/cpp_headers/vfio_user_pci.o 00:05:16.722 CXX test/cpp_headers/vfio_user_spec.o 00:05:16.722 CXX test/cpp_headers/vhost.o 00:05:16.722 CXX test/cpp_headers/vmd.o 00:05:16.722 CXX test/cpp_headers/xor.o 00:05:16.722 CXX test/cpp_headers/zipf.o 00:05:16.980 LINK cuse 00:05:20.268 LINK esnap 00:05:20.268 ************************************ 00:05:20.268 END TEST make 00:05:20.268 ************************************ 00:05:20.268 00:05:20.268 real 1m38.321s 00:05:20.268 user 8m35.269s 00:05:20.268 sys 2m7.862s 00:05:20.268 21:36:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:20.268 21:36:27 make -- common/autotest_common.sh@10 -- $ set +x 00:05:20.268 21:36:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:20.268 21:36:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:20.268 21:36:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:20.268 21:36:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.268 21:36:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:20.527 21:36:27 -- pm/common@44 -- $ pid=5291 00:05:20.527 21:36:27 -- pm/common@50 -- $ kill -TERM 5291 00:05:20.527 21:36:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.527 21:36:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:20.527 21:36:28 -- pm/common@44 -- $ pid=5293 00:05:20.527 21:36:28 -- pm/common@50 -- $ kill -TERM 5293 00:05:20.527 21:36:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:20.527 21:36:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:20.527 21:36:28 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.527 21:36:28 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.527 21:36:28 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.527 21:36:28 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.527 21:36:28 -- scripts/common.sh@373 -- # 
cmp_versions 1.15 '<' 2 00:05:20.527 21:36:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.527 21:36:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.527 21:36:28 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.527 21:36:28 -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.527 21:36:28 -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.527 21:36:28 -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.527 21:36:28 -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.527 21:36:28 -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.527 21:36:28 -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.527 21:36:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.527 21:36:28 -- scripts/common.sh@344 -- # case "$op" in 00:05:20.527 21:36:28 -- scripts/common.sh@345 -- # : 1 00:05:20.527 21:36:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.527 21:36:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.527 21:36:28 -- scripts/common.sh@365 -- # decimal 1 00:05:20.527 21:36:28 -- scripts/common.sh@353 -- # local d=1 00:05:20.527 21:36:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.527 21:36:28 -- scripts/common.sh@355 -- # echo 1 00:05:20.527 21:36:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.527 21:36:28 -- scripts/common.sh@366 -- # decimal 2 00:05:20.527 21:36:28 -- scripts/common.sh@353 -- # local d=2 00:05:20.527 21:36:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.527 21:36:28 -- scripts/common.sh@355 -- # echo 2 00:05:20.527 21:36:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.527 21:36:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.527 21:36:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.527 21:36:28 -- scripts/common.sh@368 -- # return 0 00:05:20.528 21:36:28 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.528 21:36:28 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.528 --rc genhtml_branch_coverage=1 00:05:20.528 --rc genhtml_function_coverage=1 00:05:20.528 --rc genhtml_legend=1 00:05:20.528 --rc geninfo_all_blocks=1 00:05:20.528 --rc geninfo_unexecuted_blocks=1 00:05:20.528 00:05:20.528 ' 00:05:20.528 21:36:28 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.528 --rc genhtml_branch_coverage=1 00:05:20.528 --rc genhtml_function_coverage=1 00:05:20.528 --rc genhtml_legend=1 00:05:20.528 --rc geninfo_all_blocks=1 00:05:20.528 --rc geninfo_unexecuted_blocks=1 00:05:20.528 00:05:20.528 ' 00:05:20.528 21:36:28 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.528 --rc genhtml_branch_coverage=1 00:05:20.528 --rc genhtml_function_coverage=1 00:05:20.528 --rc genhtml_legend=1 00:05:20.528 --rc geninfo_all_blocks=1 00:05:20.528 --rc geninfo_unexecuted_blocks=1 00:05:20.528 00:05:20.528 ' 00:05:20.528 21:36:28 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.528 --rc genhtml_branch_coverage=1 00:05:20.528 --rc genhtml_function_coverage=1 00:05:20.528 --rc genhtml_legend=1 00:05:20.528 --rc geninfo_all_blocks=1 00:05:20.528 --rc geninfo_unexecuted_blocks=1 00:05:20.528 00:05:20.528 ' 00:05:20.528 21:36:28 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:20.528 21:36:28 -- nvmf/common.sh@7 -- # uname -s 00:05:20.528 21:36:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.528 21:36:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.528 21:36:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.528 21:36:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.528 21:36:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.528 21:36:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.528 21:36:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.528 21:36:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.528 21:36:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.528 21:36:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.788 21:36:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8114feff-7a9b-4189-b04e-c77dfee632c5 00:05:20.788 21:36:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=8114feff-7a9b-4189-b04e-c77dfee632c5 00:05:20.788 21:36:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.788 21:36:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.788 21:36:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:20.788 21:36:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.788 21:36:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:20.788 21:36:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:20.788 21:36:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.788 21:36:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.788 21:36:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.788 21:36:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.788 21:36:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.788 21:36:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.788 21:36:28 -- paths/export.sh@5 -- # export PATH 00:05:20.788 21:36:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.788 21:36:28 -- nvmf/common.sh@51 -- # : 0 00:05:20.788 21:36:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:20.788 21:36:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:20.788 21:36:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.788 21:36:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.788 21:36:28 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.788 21:36:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:20.788 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:20.788 21:36:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:20.788 21:36:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:20.788 21:36:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:20.788 21:36:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:20.788 21:36:28 -- spdk/autotest.sh@32 -- # uname -s 00:05:20.788 21:36:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:20.788 21:36:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:20.788 21:36:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:20.788 21:36:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:20.788 21:36:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:20.788 21:36:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:20.788 21:36:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:20.788 21:36:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:20.788 21:36:28 -- spdk/autotest.sh@48 -- # udevadm_pid=56082 00:05:20.788 21:36:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:20.788 21:36:28 -- pm/common@17 -- # local monitor 00:05:20.788 21:36:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.788 21:36:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:20.788 21:36:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.788 21:36:28 -- pm/common@25 -- # sleep 1 00:05:20.788 21:36:28 -- pm/common@21 -- # date +%s 00:05:20.788 21:36:28 -- pm/common@21 -- # date +%s 00:05:20.788 21:36:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733866588 00:05:20.788 21:36:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733866588 00:05:20.788 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733866588_collect-vmstat.pm.log 00:05:20.788 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733866588_collect-cpu-load.pm.log 00:05:21.735 21:36:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:21.735 21:36:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:21.735 21:36:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.735 21:36:29 -- common/autotest_common.sh@10 -- # set +x 00:05:21.735 21:36:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:21.735 21:36:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:21.735 21:36:29 -- common/autotest_common.sh@10 -- # set +x 00:05:21.735 21:36:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:21.735 21:36:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:21.735 21:36:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:21.735 21:36:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:21.735 21:36:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:21.735 21:36:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:05:21.735 21:36:29 -- common/autotest_common.sh@1457 -- # uname 00:05:21.735 21:36:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:21.735 21:36:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:21.735 21:36:29 -- common/autotest_common.sh@1477 -- # uname 00:05:21.735 21:36:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:21.735 21:36:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:21.735 21:36:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:21.995 lcov: LCOV version 1.15 00:05:21.995 21:36:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:36.907 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:36.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:54.997 21:37:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:54.997 21:37:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.997 21:37:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.997 21:37:01 -- spdk/autotest.sh@78 -- # rm -f 00:05:54.997 21:37:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:54.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:54.997 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:54.997 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:54.997 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:54.997 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:54.997 21:37:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:54.997 21:37:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:54.997 21:37:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:54.997 21:37:02 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:54.997 21:37:02 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:54.997 21:37:02 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:54.997 21:37:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:54.997 21:37:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:54.997 21:37:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:54.997 21:37:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:54.997 21:37:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:54.997 21:37:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:54.997 21:37:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:54.997 21:37:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:54.997 21:37:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:54.997 21:37:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:54.997 21:37:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:55.256 21:37:02 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:55.256 21:37:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:55.256 21:37:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.256 21:37:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:55.256 21:37:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:55.256 21:37:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:55.256 21:37:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:55.256 21:37:02 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:55.256 21:37:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:55.256 21:37:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.256 21:37:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:55.256 21:37:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:05:55.256 21:37:02 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:55.256 21:37:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:55.256 21:37:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.256 21:37:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:55.256 21:37:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:55.256 21:37:02 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:55.256 21:37:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:55.256 21:37:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.256 21:37:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:55.256 21:37:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:55.256 21:37:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:55.256 21:37:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:55.256 21:37:02 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:55.256 21:37:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:55.257 21:37:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.257 21:37:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:55.257 21:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.257 21:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.257 21:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:55.257 21:37:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:55.257 21:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:55.257 No valid GPT data, bailing 00:05:55.257 21:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:55.257 21:37:02 -- scripts/common.sh@394 -- # pt= 00:05:55.257 21:37:02 -- scripts/common.sh@395 -- # return 1 00:05:55.257 21:37:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:55.257 1+0 records in 00:05:55.257 1+0 records out 00:05:55.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185795 s, 56.4 MB/s 00:05:55.257 21:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.257 21:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.257 21:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:55.257 21:37:02 -- scripts/common.sh@381 -- # local 
block=/dev/nvme1n1 pt 00:05:55.257 21:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:55.257 No valid GPT data, bailing 00:05:55.257 21:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:55.257 21:37:02 -- scripts/common.sh@394 -- # pt= 00:05:55.257 21:37:02 -- scripts/common.sh@395 -- # return 1 00:05:55.257 21:37:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:55.257 1+0 records in 00:05:55.257 1+0 records out 00:05:55.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689978 s, 152 MB/s 00:05:55.257 21:37:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.257 21:37:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.257 21:37:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:55.257 21:37:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:55.257 21:37:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:55.257 No valid GPT data, bailing 00:05:55.516 21:37:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:55.516 21:37:03 -- scripts/common.sh@394 -- # pt= 00:05:55.516 21:37:03 -- scripts/common.sh@395 -- # return 1 00:05:55.516 21:37:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:55.516 1+0 records in 00:05:55.516 1+0 records out 00:05:55.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00688391 s, 152 MB/s 00:05:55.516 21:37:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.516 21:37:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.516 21:37:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:55.516 21:37:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:55.516 21:37:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:55.516 No valid GPT data, bailing 00:05:55.516 21:37:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:55.516 21:37:03 -- scripts/common.sh@394 -- # pt= 00:05:55.516 21:37:03 -- scripts/common.sh@395 -- # return 1 00:05:55.516 21:37:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:55.516 1+0 records in 00:05:55.516 1+0 records out 00:05:55.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627166 s, 167 MB/s 00:05:55.516 21:37:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.516 21:37:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.516 21:37:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:55.516 21:37:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:55.516 21:37:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:55.516 No valid GPT data, bailing 00:05:55.516 21:37:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:55.516 21:37:03 -- scripts/common.sh@394 -- # pt= 00:05:55.516 21:37:03 -- scripts/common.sh@395 -- # return 1 00:05:55.516 21:37:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:55.516 1+0 records in 00:05:55.516 1+0 records out 00:05:55.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614094 s, 171 MB/s 00:05:55.516 21:37:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.516 21:37:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.516 21:37:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:55.517 21:37:03 -- scripts/common.sh@381 -- # 
local block=/dev/nvme3n1 pt 00:05:55.517 21:37:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:55.517 No valid GPT data, bailing 00:05:55.776 21:37:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:55.776 21:37:03 -- scripts/common.sh@394 -- # pt= 00:05:55.776 21:37:03 -- scripts/common.sh@395 -- # return 1 00:05:55.776 21:37:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:55.776 1+0 records in 00:05:55.776 1+0 records out 00:05:55.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0064572 s, 162 MB/s 00:05:55.776 21:37:03 -- spdk/autotest.sh@105 -- # sync 00:05:55.776 21:37:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:55.776 21:37:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:55.776 21:37:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:57.680 21:37:05 -- spdk/autotest.sh@111 -- # uname -s 00:05:57.680 21:37:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:57.680 21:37:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:57.680 21:37:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:58.248 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.817 Hugepages 00:05:58.817 node hugesize free / total 00:05:58.817 node0 1048576kB 0 / 0 00:05:58.817 node0 2048kB 0 / 0 00:05:58.817 00:05:58.817 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:58.817 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:58.817 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:59.076 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:59.076 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:59.335 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:59.335 21:37:06 -- spdk/autotest.sh@117 -- # uname -s 00:05:59.335 21:37:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:59.335 21:37:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:59.335 21:37:06 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:59.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.499 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.758 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.758 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.758 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.758 21:37:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:01.696 21:37:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:01.696 21:37:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:01.696 21:37:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:01.696 21:37:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:01.696 21:37:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:01.696 21:37:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:01.696 21:37:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.956 21:37:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:01.956 21:37:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:01.956 21:37:09 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 
00:06:01.956 21:37:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:01.956 21:37:09 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.524 Waiting for block devices as requested 00:06:02.782 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:02.782 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:02.782 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:03.045 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:08.325 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:08.325 21:37:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:08.325 21:37:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:08.325 21:37:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:08.325 21:37:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:08.325 21:37:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:08.325 21:37:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:08.325 21:37:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:08.325 21:37:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:08.325 21:37:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:08.325 21:37:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:08.325 21:37:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:08.325 21:37:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:08.325 21:37:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:08.325 21:37:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:08.325 21:37:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:08.325 21:37:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:08.325 21:37:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:08.325 21:37:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:08.325 21:37:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:08.325 21:37:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:08.325 21:37:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:08.325 21:37:15 -- common/autotest_common.sh@1543 -- # continue 00:06:08.325 21:37:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:08.326 21:37:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:08.326 21:37:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:08.326 21:37:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:08.326 21:37:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:08.326 21:37:15 -- 
common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:08.326 21:37:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:08.326 21:37:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:08.326 21:37:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:08.326 21:37:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:08.326 21:37:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1543 -- # continue 00:06:08.326 21:37:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:08.326 21:37:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:08.326 21:37:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:06:08.326 21:37:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:08.326 21:37:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:08.326 21:37:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:08.326 21:37:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1543 -- # continue 00:06:08.326 21:37:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:08.326 21:37:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:08.326 21:37:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:06:08.326 21:37:15 -- 
common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:08.326 21:37:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:08.326 21:37:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:08.326 21:37:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:08.326 21:37:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:08.326 21:37:15 -- common/autotest_common.sh@1543 -- # continue 00:06:08.326 21:37:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:08.326 21:37:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.326 21:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.326 21:37:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:08.326 21:37:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.326 21:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:08.326 21:37:15 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:09.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:09.831 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:09.831 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:09.831 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:09.831 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:10.090 21:37:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:10.090 21:37:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.090 21:37:17 -- common/autotest_common.sh@10 -- # set +x 00:06:10.090 21:37:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:10.090 21:37:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:10.090 21:37:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:10.090 21:37:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:10.090 21:37:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:10.090 21:37:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:10.090 21:37:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:10.090 21:37:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:10.090 21:37:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:10.090 21:37:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:10.090 21:37:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:10.090 21:37:17 -- common/autotest_common.sh@1499 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:10.090 21:37:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:10.090 21:37:17 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:10.090 21:37:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:10.090 21:37:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:10.090 21:37:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:10.090 21:37:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:10.090 21:37:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:10.090 21:37:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:10.090 21:37:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:10.090 21:37:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:10.090 21:37:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:10.090 21:37:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:10.090 21:37:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:10.090 21:37:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:10.090 21:37:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:10.090 21:37:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:10.090 21:37:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:10.090 21:37:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:10.090 21:37:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:10.090 21:37:17 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:10.090 21:37:17 -- common/autotest_common.sh@1572 -- # return 0 00:06:10.350 21:37:17 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:10.350 21:37:17 -- common/autotest_common.sh@1580 -- # return 0 00:06:10.350 21:37:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:10.350 21:37:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:10.350 21:37:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:10.350 21:37:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:10.350 21:37:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:10.350 21:37:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.350 21:37:17 -- common/autotest_common.sh@10 -- # set +x 00:06:10.350 21:37:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:10.350 21:37:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:10.350 21:37:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.350 21:37:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.350 21:37:17 -- common/autotest_common.sh@10 -- # set +x 00:06:10.350 ************************************ 00:06:10.350 START TEST env 00:06:10.350 ************************************ 00:06:10.350 21:37:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:10.350 * Looking for test storage... 
00:06:10.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:10.350 21:37:17 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.350 21:37:17 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.350 21:37:17 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.350 21:37:18 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.350 21:37:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.350 21:37:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.350 21:37:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.350 21:37:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.350 21:37:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.350 21:37:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.350 21:37:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.350 21:37:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.350 21:37:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.350 21:37:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.350 21:37:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.350 21:37:18 env -- scripts/common.sh@344 -- # case "$op" in 00:06:10.350 21:37:18 env -- scripts/common.sh@345 -- # : 1 00:06:10.350 21:37:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.350 21:37:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.350 21:37:18 env -- scripts/common.sh@365 -- # decimal 1 00:06:10.350 21:37:18 env -- scripts/common.sh@353 -- # local d=1 00:06:10.350 21:37:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.350 21:37:18 env -- scripts/common.sh@355 -- # echo 1 00:06:10.350 21:37:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.350 21:37:18 env -- scripts/common.sh@366 -- # decimal 2 00:06:10.350 21:37:18 env -- scripts/common.sh@353 -- # local d=2 00:06:10.350 21:37:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.609 21:37:18 env -- scripts/common.sh@355 -- # echo 2 00:06:10.609 21:37:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.609 21:37:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.609 21:37:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.609 21:37:18 env -- scripts/common.sh@368 -- # return 0 00:06:10.609 21:37:18 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.609 21:37:18 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.609 --rc genhtml_branch_coverage=1 00:06:10.609 --rc genhtml_function_coverage=1 00:06:10.609 --rc genhtml_legend=1 00:06:10.609 --rc geninfo_all_blocks=1 00:06:10.609 --rc geninfo_unexecuted_blocks=1 00:06:10.609 00:06:10.609 ' 00:06:10.609 21:37:18 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.609 --rc genhtml_branch_coverage=1 00:06:10.609 --rc genhtml_function_coverage=1 00:06:10.609 --rc genhtml_legend=1 00:06:10.609 --rc geninfo_all_blocks=1 00:06:10.609 --rc geninfo_unexecuted_blocks=1 00:06:10.609 00:06:10.609 ' 00:06:10.609 21:37:18 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.609 --rc genhtml_branch_coverage=1 00:06:10.609 --rc genhtml_function_coverage=1 00:06:10.609 --rc 
genhtml_legend=1 00:06:10.609 --rc geninfo_all_blocks=1 00:06:10.609 --rc geninfo_unexecuted_blocks=1 00:06:10.609 00:06:10.609 ' 00:06:10.609 21:37:18 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.609 --rc genhtml_branch_coverage=1 00:06:10.609 --rc genhtml_function_coverage=1 00:06:10.609 --rc genhtml_legend=1 00:06:10.609 --rc geninfo_all_blocks=1 00:06:10.609 --rc geninfo_unexecuted_blocks=1 00:06:10.609 00:06:10.609 ' 00:06:10.609 21:37:18 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:10.610 21:37:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.610 21:37:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.610 21:37:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.610 ************************************ 00:06:10.610 START TEST env_memory 00:06:10.610 ************************************ 00:06:10.610 21:37:18 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:10.610 00:06:10.610 00:06:10.610 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.610 http://cunit.sourceforge.net/ 00:06:10.610 00:06:10.610 00:06:10.610 Suite: memory 00:06:10.610 Test: alloc and free memory map ...[2024-12-10 21:37:18.172858] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:10.610 passed 00:06:10.610 Test: mem map translation ...[2024-12-10 21:37:18.217717] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:10.610 [2024-12-10 21:37:18.217821] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:10.610 [2024-12-10 21:37:18.217899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:10.610 [2024-12-10 21:37:18.217928] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:10.610 passed 00:06:10.610 Test: mem map registration ...[2024-12-10 21:37:18.286065] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:10.610 [2024-12-10 21:37:18.286151] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:10.610 passed 00:06:10.869 Test: mem map adjacent registrations ...passed 00:06:10.869 00:06:10.869 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.869 suites 1 1 n/a 0 0 00:06:10.869 tests 4 4 4 0 0 00:06:10.869 asserts 152 152 152 0 n/a 00:06:10.869 00:06:10.869 Elapsed time = 0.246 seconds 00:06:10.869 00:06:10.869 real 0m0.303s 00:06:10.869 user 0m0.257s 00:06:10.869 sys 0m0.034s 00:06:10.869 21:37:18 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.869 ************************************ 00:06:10.869 END TEST env_memory 00:06:10.869 ************************************ 00:06:10.869 21:37:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:10.869 21:37:18 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:10.869 21:37:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.869 21:37:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.869 21:37:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.869 ************************************ 00:06:10.869 START TEST env_vtophys 00:06:10.869 ************************************ 00:06:10.869 21:37:18 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:10.869 EAL: lib.eal log level changed from notice to debug 00:06:10.869 EAL: Detected lcore 0 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 1 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 2 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 3 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 4 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 5 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 6 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 7 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 8 as core 0 on socket 0 00:06:10.869 EAL: Detected lcore 9 as core 0 on socket 0 00:06:10.869 EAL: Maximum logical cores by configuration: 128 00:06:10.869 EAL: Detected CPU lcores: 10 00:06:10.869 EAL: Detected NUMA nodes: 1 00:06:10.869 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:10.869 EAL: Detected shared linkage of DPDK 00:06:10.869 EAL: No shared files mode enabled, IPC will be disabled 00:06:10.869 EAL: Selected IOVA mode 'PA' 00:06:10.869 EAL: Probing VFIO support... 00:06:10.869 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:10.869 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:10.869 EAL: Ask a virtual area of 0x2e000 bytes 00:06:10.869 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:10.869 EAL: Setting up physically contiguous memory... 
00:06:10.869 EAL: Setting maximum number of open files to 524288 00:06:10.869 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:10.869 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:10.869 EAL: Ask a virtual area of 0x61000 bytes 00:06:10.869 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:10.869 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:10.869 EAL: Ask a virtual area of 0x400000000 bytes 00:06:10.869 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:10.869 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:10.869 EAL: Ask a virtual area of 0x61000 bytes 00:06:10.869 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:10.869 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:10.869 EAL: Ask a virtual area of 0x400000000 bytes 00:06:10.869 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:10.869 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:10.869 EAL: Ask a virtual area of 0x61000 bytes 00:06:10.869 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:10.869 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:10.869 EAL: Ask a virtual area of 0x400000000 bytes 00:06:10.869 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:10.869 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:10.869 EAL: Ask a virtual area of 0x61000 bytes 00:06:10.869 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:10.869 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:10.869 EAL: Ask a virtual area of 0x400000000 bytes 00:06:10.869 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:10.869 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:10.869 EAL: Hugepages will be freed exactly as allocated. 00:06:10.869 EAL: No shared files mode enabled, IPC is disabled 00:06:10.870 EAL: No shared files mode enabled, IPC is disabled 00:06:11.129 EAL: TSC frequency is ~2490000 KHz 00:06:11.129 EAL: Main lcore 0 is ready (tid=7fe3188daa40;cpuset=[0]) 00:06:11.129 EAL: Trying to obtain current memory policy. 00:06:11.129 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.129 EAL: Restoring previous memory policy: 0 00:06:11.129 EAL: request: mp_malloc_sync 00:06:11.129 EAL: No shared files mode enabled, IPC is disabled 00:06:11.129 EAL: Heap on socket 0 was expanded by 2MB 00:06:11.129 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:11.129 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:11.129 EAL: Mem event callback 'spdk:(nil)' registered 00:06:11.129 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:11.129 00:06:11.129 00:06:11.129 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.129 http://cunit.sourceforge.net/ 00:06:11.129 00:06:11.129 00:06:11.129 Suite: components_suite 00:06:11.697 Test: vtophys_malloc_test ...passed 00:06:11.697 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:11.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.697 EAL: Restoring previous memory policy: 4 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was expanded by 4MB 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was shrunk by 4MB 00:06:11.697 EAL: Trying to obtain current memory policy. 00:06:11.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.697 EAL: Restoring previous memory policy: 4 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was expanded by 6MB 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was shrunk by 6MB 00:06:11.697 EAL: Trying to obtain current memory policy. 00:06:11.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.697 EAL: Restoring previous memory policy: 4 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was expanded by 10MB 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was shrunk by 10MB 00:06:11.697 EAL: Trying to obtain current memory policy. 00:06:11.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.697 EAL: Restoring previous memory policy: 4 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was expanded by 18MB 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was shrunk by 18MB 00:06:11.697 EAL: Trying to obtain current memory policy. 00:06:11.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.697 EAL: Restoring previous memory policy: 4 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was expanded by 34MB 00:06:11.697 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.697 EAL: request: mp_malloc_sync 00:06:11.697 EAL: No shared files mode enabled, IPC is disabled 00:06:11.697 EAL: Heap on socket 0 was shrunk by 34MB 00:06:11.957 EAL: Trying to obtain current memory policy. 
00:06:11.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.957 EAL: Restoring previous memory policy: 4 00:06:11.957 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.957 EAL: request: mp_malloc_sync 00:06:11.957 EAL: No shared files mode enabled, IPC is disabled 00:06:11.957 EAL: Heap on socket 0 was expanded by 66MB 00:06:11.957 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.957 EAL: request: mp_malloc_sync 00:06:11.957 EAL: No shared files mode enabled, IPC is disabled 00:06:11.957 EAL: Heap on socket 0 was shrunk by 66MB 00:06:12.216 EAL: Trying to obtain current memory policy. 00:06:12.216 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.216 EAL: Restoring previous memory policy: 4 00:06:12.216 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.216 EAL: request: mp_malloc_sync 00:06:12.216 EAL: No shared files mode enabled, IPC is disabled 00:06:12.216 EAL: Heap on socket 0 was expanded by 130MB 00:06:12.475 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.475 EAL: request: mp_malloc_sync 00:06:12.475 EAL: No shared files mode enabled, IPC is disabled 00:06:12.475 EAL: Heap on socket 0 was shrunk by 130MB 00:06:12.733 EAL: Trying to obtain current memory policy. 00:06:12.733 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.733 EAL: Restoring previous memory policy: 4 00:06:12.733 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.733 EAL: request: mp_malloc_sync 00:06:12.733 EAL: No shared files mode enabled, IPC is disabled 00:06:12.733 EAL: Heap on socket 0 was expanded by 258MB 00:06:13.300 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.300 EAL: request: mp_malloc_sync 00:06:13.300 EAL: No shared files mode enabled, IPC is disabled 00:06:13.300 EAL: Heap on socket 0 was shrunk by 258MB 00:06:13.866 EAL: Trying to obtain current memory policy. 00:06:13.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.866 EAL: Restoring previous memory policy: 4 00:06:13.866 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.866 EAL: request: mp_malloc_sync 00:06:13.866 EAL: No shared files mode enabled, IPC is disabled 00:06:13.866 EAL: Heap on socket 0 was expanded by 514MB 00:06:15.243 EAL: Calling mem event callback 'spdk:(nil)' 00:06:15.243 EAL: request: mp_malloc_sync 00:06:15.243 EAL: No shared files mode enabled, IPC is disabled 00:06:15.243 EAL: Heap on socket 0 was shrunk by 514MB 00:06:16.178 EAL: Trying to obtain current memory policy. 
00:06:16.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.437 EAL: Restoring previous memory policy: 4 00:06:16.437 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.437 EAL: request: mp_malloc_sync 00:06:16.437 EAL: No shared files mode enabled, IPC is disabled 00:06:16.437 EAL: Heap on socket 0 was expanded by 1026MB 00:06:18.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.602 EAL: request: mp_malloc_sync 00:06:18.602 EAL: No shared files mode enabled, IPC is disabled 00:06:18.602 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:20.508 passed 00:06:20.508 00:06:20.508 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.508 suites 1 1 n/a 0 0 00:06:20.508 tests 2 2 2 0 0 00:06:20.508 asserts 5698 5698 5698 0 n/a 00:06:20.508 00:06:20.508 Elapsed time = 9.243 seconds 00:06:20.508 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.508 EAL: request: mp_malloc_sync 00:06:20.508 EAL: No shared files mode enabled, IPC is disabled 00:06:20.508 EAL: Heap on socket 0 was shrunk by 2MB 00:06:20.508 EAL: No shared files mode enabled, IPC is disabled 00:06:20.508 EAL: No shared files mode enabled, IPC is disabled 00:06:20.508 EAL: No shared files mode enabled, IPC is disabled 00:06:20.508 00:06:20.508 real 0m9.601s 00:06:20.508 user 0m8.208s 00:06:20.508 sys 0m1.225s 00:06:20.508 ************************************ 00:06:20.508 END TEST env_vtophys 00:06:20.508 ************************************ 00:06:20.508 21:37:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.508 21:37:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:20.508 21:37:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:20.508 21:37:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.508 21:37:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.508 21:37:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.508 ************************************ 00:06:20.508 START TEST env_pci 00:06:20.508 ************************************ 00:06:20.508 21:37:28 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:20.508 00:06:20.508 00:06:20.508 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.508 http://cunit.sourceforge.net/ 00:06:20.508 00:06:20.508 00:06:20.508 Suite: pci 00:06:20.508 Test: pci_hook ...[2024-12-10 21:37:28.188115] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58956 has claimed it 00:06:20.508 passed 00:06:20.508 00:06:20.508 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.508 suites 1 1 n/a 0 0 00:06:20.508 tests 1 1 1 0 0 00:06:20.508 asserts 25 25 25 0 n/a 00:06:20.508 00:06:20.508 Elapsed time = 0.014 seconds 00:06:20.508 EAL: Cannot find device (10000:00:01.0) 00:06:20.508 EAL: Failed to attach device on primary process 00:06:20.767 00:06:20.767 real 0m0.105s 00:06:20.767 user 0m0.047s 00:06:20.767 sys 0m0.056s 00:06:20.767 ************************************ 00:06:20.767 END TEST env_pci 00:06:20.767 ************************************ 00:06:20.767 21:37:28 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.767 21:37:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:20.767 21:37:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:20.767 21:37:28 env -- env/env.sh@15 -- # uname 00:06:20.767 21:37:28 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:20.767 21:37:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:20.768 21:37:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:20.768 21:37:28 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:20.768 21:37:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.768 21:37:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.768 ************************************ 00:06:20.768 START TEST env_dpdk_post_init 00:06:20.768 ************************************ 00:06:20.768 21:37:28 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:20.768 EAL: Detected CPU lcores: 10 00:06:20.768 EAL: Detected NUMA nodes: 1 00:06:20.768 EAL: Detected shared linkage of DPDK 00:06:20.768 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:20.768 EAL: Selected IOVA mode 'PA' 00:06:21.026 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:21.026 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:21.026 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:21.026 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:21.026 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:21.026 Starting DPDK initialization... 00:06:21.026 Starting SPDK post initialization... 00:06:21.026 SPDK NVMe probe 00:06:21.026 Attaching to 0000:00:10.0 00:06:21.026 Attaching to 0000:00:11.0 00:06:21.026 Attaching to 0000:00:12.0 00:06:21.026 Attaching to 0000:00:13.0 00:06:21.027 Attached to 0000:00:10.0 00:06:21.027 Attached to 0000:00:11.0 00:06:21.027 Attached to 0000:00:13.0 00:06:21.027 Attached to 0000:00:12.0 00:06:21.027 Cleaning up... 
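The probe sequence above shows the spdk_nvme driver attaching to the four emulated controllers (1b36:0010) that setup.sh rebound to uio_pci_generic during the afterboot stage. A minimal sketch for checking those bindings by hand, assuming the standard Linux PCI sysfs layout and the BDF list from this run:

    #!/usr/bin/env bash
    # Report which kernel driver, if any, each controller from this run is bound to.
    # BDFs are taken from the log above; adjust for other machines.
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        drv="/sys/bus/pci/devices/$bdf/driver"
        if [ -e "$drv" ]; then
            printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f "$drv")")"
        else
            printf '%s -> (unbound)\n' "$bdf"
        fi
    done

A controller appears as a /dev/nvme* node only while the kernel nvme driver holds it; once rebound for userspace I/O it is visible to DPDK/SPDK instead, which is what the "Attached to" lines above record.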
00:06:21.027 00:06:21.027 real 0m0.325s 00:06:21.027 user 0m0.110s 00:06:21.027 sys 0m0.118s 00:06:21.027 21:37:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.027 ************************************ 00:06:21.027 END TEST env_dpdk_post_init 00:06:21.027 ************************************ 00:06:21.027 21:37:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.027 21:37:28 env -- env/env.sh@26 -- # uname 00:06:21.027 21:37:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:21.027 21:37:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:21.027 21:37:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.027 21:37:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.027 21:37:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.027 ************************************ 00:06:21.027 START TEST env_mem_callbacks 00:06:21.027 ************************************ 00:06:21.027 21:37:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:21.285 EAL: Detected CPU lcores: 10 00:06:21.285 EAL: Detected NUMA nodes: 1 00:06:21.285 EAL: Detected shared linkage of DPDK 00:06:21.285 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:21.285 EAL: Selected IOVA mode 'PA' 00:06:21.285 00:06:21.285 00:06:21.285 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.285 http://cunit.sourceforge.net/ 00:06:21.285 00:06:21.285 00:06:21.285 Suite: memory 00:06:21.285 Test: test ... 00:06:21.285 register 0x200000200000 2097152 00:06:21.285 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:21.285 malloc 3145728 00:06:21.285 register 0x200000400000 4194304 00:06:21.285 buf 0x2000004fffc0 len 3145728 PASSED 00:06:21.285 malloc 64 00:06:21.285 buf 0x2000004ffec0 len 64 PASSED 00:06:21.285 malloc 4194304 00:06:21.285 register 0x200000800000 6291456 00:06:21.285 buf 0x2000009fffc0 len 4194304 PASSED 00:06:21.285 free 0x2000004fffc0 3145728 00:06:21.285 free 0x2000004ffec0 64 00:06:21.285 unregister 0x200000400000 4194304 PASSED 00:06:21.285 free 0x2000009fffc0 4194304 00:06:21.285 unregister 0x200000800000 6291456 PASSED 00:06:21.285 malloc 8388608 00:06:21.285 register 0x200000400000 10485760 00:06:21.285 buf 0x2000005fffc0 len 8388608 PASSED 00:06:21.285 free 0x2000005fffc0 8388608 00:06:21.285 unregister 0x200000400000 10485760 PASSED 00:06:21.285 passed 00:06:21.285 00:06:21.285 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.285 suites 1 1 n/a 0 0 00:06:21.285 tests 1 1 1 0 0 00:06:21.285 asserts 15 15 15 0 n/a 00:06:21.285 00:06:21.285 Elapsed time = 0.081 seconds 00:06:21.285 00:06:21.285 real 0m0.292s 00:06:21.285 user 0m0.119s 00:06:21.285 sys 0m0.070s 00:06:21.285 21:37:29 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.285 21:37:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 ************************************ 00:06:21.543 END TEST env_mem_callbacks 00:06:21.543 ************************************ 00:06:21.543 ************************************ 00:06:21.543 END TEST env 00:06:21.543 ************************************ 00:06:21.543 00:06:21.543 real 0m11.225s 00:06:21.543 user 0m8.995s 00:06:21.543 sys 0m1.870s 00:06:21.543 21:37:29 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.543 21:37:29 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:21.543 21:37:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:21.543 21:37:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.543 21:37:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.543 21:37:29 -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 ************************************ 00:06:21.543 START TEST rpc 00:06:21.543 ************************************ 00:06:21.543 21:37:29 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:21.543 * Looking for test storage... 00:06:21.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.802 21:37:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.802 21:37:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.802 21:37:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.802 21:37:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.802 21:37:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.802 21:37:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.802 21:37:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.802 21:37:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.802 21:37:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.802 21:37:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.802 21:37:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.802 21:37:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.802 21:37:29 rpc -- scripts/common.sh@345 -- # : 1 00:06:21.802 21:37:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.802 21:37:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.802 21:37:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.802 21:37:29 rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.802 21:37:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.802 21:37:29 rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.802 21:37:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.802 21:37:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.802 21:37:29 rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.802 21:37:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.802 21:37:29 rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.802 21:37:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.802 21:37:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.802 21:37:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.802 21:37:29 rpc -- scripts/common.sh@368 -- # return 0 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.802 --rc genhtml_branch_coverage=1 00:06:21.802 --rc genhtml_function_coverage=1 00:06:21.802 --rc genhtml_legend=1 00:06:21.802 --rc geninfo_all_blocks=1 00:06:21.802 --rc geninfo_unexecuted_blocks=1 00:06:21.802 00:06:21.802 ' 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.802 --rc genhtml_branch_coverage=1 00:06:21.802 --rc genhtml_function_coverage=1 00:06:21.802 --rc genhtml_legend=1 00:06:21.802 --rc geninfo_all_blocks=1 00:06:21.802 --rc geninfo_unexecuted_blocks=1 00:06:21.802 00:06:21.802 ' 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.802 --rc genhtml_branch_coverage=1 00:06:21.802 --rc genhtml_function_coverage=1 00:06:21.802 --rc genhtml_legend=1 00:06:21.802 --rc geninfo_all_blocks=1 00:06:21.802 --rc geninfo_unexecuted_blocks=1 00:06:21.802 00:06:21.802 ' 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.802 --rc genhtml_branch_coverage=1 00:06:21.802 --rc genhtml_function_coverage=1 00:06:21.802 --rc genhtml_legend=1 00:06:21.802 --rc geninfo_all_blocks=1 00:06:21.802 --rc geninfo_unexecuted_blocks=1 00:06:21.802 00:06:21.802 ' 00:06:21.802 21:37:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59089 00:06:21.802 21:37:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:21.802 21:37:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.802 21:37:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59089 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 59089 ']' 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
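waitforlisten polls until spdk_tgt answers on /var/tmp/spdk.sock before the suites run. A simplified stand-in with the same behavior, assuming SPDK's scripts/rpc.py is on PATH (a sketch, not the harness's actual helper):

    #!/usr/bin/env bash
    # Poll the RPC socket until the target responds or we give up.
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        # rpc_get_methods is a core RPC every spdk_tgt answers once it is up.
        if rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt is listening on $sock"
            exit 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    exit 1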
00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.802 21:37:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.802 [2024-12-10 21:37:29.502275] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:21.802 [2024-12-10 21:37:29.502651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ] 00:06:22.063 [2024-12-10 21:37:29.690568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.334 [2024-12-10 21:37:29.812380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:22.334 [2024-12-10 21:37:29.812693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59089' to capture a snapshot of events at runtime. 00:06:22.334 [2024-12-10 21:37:29.812721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.334 [2024-12-10 21:37:29.812739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.334 [2024-12-10 21:37:29.812753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59089 for offline analysis/debug. 00:06:22.334 [2024-12-10 21:37:29.814086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.270 21:37:30 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.270 21:37:30 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.270 21:37:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:23.270 21:37:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:23.270 21:37:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:23.270 21:37:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:23.270 21:37:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.270 21:37:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.270 21:37:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.270 ************************************ 00:06:23.270 START TEST rpc_integrity 00:06:23.270 ************************************ 00:06:23.270 21:37:30 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:23.270 21:37:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:23.270 21:37:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.270 21:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.270 21:37:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.270 21:37:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:23.270 21:37:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:23.270 21:37:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:23.270 21:37:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:23.270 21:37:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.270 21:37:30 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.529 21:37:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.529 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:23.529 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:23.529 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.529 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.529 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.529 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:23.529 { 00:06:23.529 "name": "Malloc0", 00:06:23.529 "aliases": [ 00:06:23.529 "83447f8d-ccd0-4dfe-a987-577a4e9945fa" 00:06:23.529 ], 00:06:23.529 "product_name": "Malloc disk", 00:06:23.529 "block_size": 512, 00:06:23.529 "num_blocks": 16384, 00:06:23.529 "uuid": "83447f8d-ccd0-4dfe-a987-577a4e9945fa", 00:06:23.530 "assigned_rate_limits": { 00:06:23.530 "rw_ios_per_sec": 0, 00:06:23.530 "rw_mbytes_per_sec": 0, 00:06:23.530 "r_mbytes_per_sec": 0, 00:06:23.530 "w_mbytes_per_sec": 0 00:06:23.530 }, 00:06:23.530 "claimed": false, 00:06:23.530 "zoned": false, 00:06:23.530 "supported_io_types": { 00:06:23.530 "read": true, 00:06:23.530 "write": true, 00:06:23.530 "unmap": true, 00:06:23.530 "flush": true, 00:06:23.530 "reset": true, 00:06:23.530 "nvme_admin": false, 00:06:23.530 "nvme_io": false, 00:06:23.530 "nvme_io_md": false, 00:06:23.530 "write_zeroes": true, 00:06:23.530 "zcopy": true, 00:06:23.530 "get_zone_info": false, 00:06:23.530 "zone_management": false, 00:06:23.530 "zone_append": false, 00:06:23.530 "compare": false, 00:06:23.530 "compare_and_write": false, 00:06:23.530 "abort": true, 00:06:23.530 "seek_hole": false, 00:06:23.530 "seek_data": false, 00:06:23.530 "copy": true, 00:06:23.530 "nvme_iov_md": false 00:06:23.530 }, 00:06:23.530 "memory_domains": [ 00:06:23.530 { 00:06:23.530 "dma_device_id": "system", 00:06:23.530 "dma_device_type": 1 00:06:23.530 }, 00:06:23.530 { 00:06:23.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.530 "dma_device_type": 2 00:06:23.530 } 00:06:23.530 ], 00:06:23.530 "driver_specific": {} 00:06:23.530 } 00:06:23.530 ]' 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.530 [2024-12-10 21:37:31.090318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:23.530 [2024-12-10 21:37:31.090406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.530 [2024-12-10 21:37:31.090446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:23.530 [2024-12-10 21:37:31.090466] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.530 [2024-12-10 21:37:31.093385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.530 [2024-12-10 21:37:31.093444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:23.530 Passthru0 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.530 
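rpc_integrity drives each step through the rpc_cmd wrapper; issued directly with scripts/rpc.py against the same socket, the sequence traced here reduces to the following (a sketch using the sizes from the trace: bdev_malloc_create 8 512 yields the 16384 x 512-byte Malloc0 above):

    # Create a malloc bdev, wrap it in a passthru bdev, then confirm
    # bdev_get_bdevs reports both (the test asserts jq length == 2).
    rpc.py bdev_malloc_create 8 512 -b Malloc0
    rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    rpc.py bdev_get_bdevs | jq length        # expect: 2
    # Tear down in reverse, as rpc.sh@23 and rpc.sh@24 do below.
    rpc.py bdev_passthru_delete Passthru0
    rpc.py bdev_malloc_delete Malloc0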
21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:23.530 { 00:06:23.530 "name": "Malloc0", 00:06:23.530 "aliases": [ 00:06:23.530 "83447f8d-ccd0-4dfe-a987-577a4e9945fa" 00:06:23.530 ], 00:06:23.530 "product_name": "Malloc disk", 00:06:23.530 "block_size": 512, 00:06:23.530 "num_blocks": 16384, 00:06:23.530 "uuid": "83447f8d-ccd0-4dfe-a987-577a4e9945fa", 00:06:23.530 "assigned_rate_limits": { 00:06:23.530 "rw_ios_per_sec": 0, 00:06:23.530 "rw_mbytes_per_sec": 0, 00:06:23.530 "r_mbytes_per_sec": 0, 00:06:23.530 "w_mbytes_per_sec": 0 00:06:23.530 }, 00:06:23.530 "claimed": true, 00:06:23.530 "claim_type": "exclusive_write", 00:06:23.530 "zoned": false, 00:06:23.530 "supported_io_types": { 00:06:23.530 "read": true, 00:06:23.530 "write": true, 00:06:23.530 "unmap": true, 00:06:23.530 "flush": true, 00:06:23.530 "reset": true, 00:06:23.530 "nvme_admin": false, 00:06:23.530 "nvme_io": false, 00:06:23.530 "nvme_io_md": false, 00:06:23.530 "write_zeroes": true, 00:06:23.530 "zcopy": true, 00:06:23.530 "get_zone_info": false, 00:06:23.530 "zone_management": false, 00:06:23.530 "zone_append": false, 00:06:23.530 "compare": false, 00:06:23.530 "compare_and_write": false, 00:06:23.530 "abort": true, 00:06:23.530 "seek_hole": false, 00:06:23.530 "seek_data": false, 00:06:23.530 "copy": true, 00:06:23.530 "nvme_iov_md": false 00:06:23.530 }, 00:06:23.530 "memory_domains": [ 00:06:23.530 { 00:06:23.530 "dma_device_id": "system", 00:06:23.530 "dma_device_type": 1 00:06:23.530 }, 00:06:23.530 { 00:06:23.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.530 "dma_device_type": 2 00:06:23.530 } 00:06:23.530 ], 00:06:23.530 "driver_specific": {} 00:06:23.530 }, 00:06:23.530 { 00:06:23.530 "name": "Passthru0", 00:06:23.530 "aliases": [ 00:06:23.530 "8bd2661a-e67d-511a-9376-4fe1fb9ea89e" 00:06:23.530 ], 00:06:23.530 "product_name": "passthru", 00:06:23.530 "block_size": 512, 00:06:23.530 "num_blocks": 16384, 00:06:23.530 "uuid": "8bd2661a-e67d-511a-9376-4fe1fb9ea89e", 00:06:23.530 "assigned_rate_limits": { 00:06:23.530 "rw_ios_per_sec": 0, 00:06:23.530 "rw_mbytes_per_sec": 0, 00:06:23.530 "r_mbytes_per_sec": 0, 00:06:23.530 "w_mbytes_per_sec": 0 00:06:23.530 }, 00:06:23.530 "claimed": false, 00:06:23.530 "zoned": false, 00:06:23.530 "supported_io_types": { 00:06:23.530 "read": true, 00:06:23.530 "write": true, 00:06:23.530 "unmap": true, 00:06:23.530 "flush": true, 00:06:23.530 "reset": true, 00:06:23.530 "nvme_admin": false, 00:06:23.530 "nvme_io": false, 00:06:23.530 "nvme_io_md": false, 00:06:23.530 "write_zeroes": true, 00:06:23.530 "zcopy": true, 00:06:23.530 "get_zone_info": false, 00:06:23.530 "zone_management": false, 00:06:23.530 "zone_append": false, 00:06:23.530 "compare": false, 00:06:23.530 "compare_and_write": false, 00:06:23.530 "abort": true, 00:06:23.530 "seek_hole": false, 00:06:23.530 "seek_data": false, 00:06:23.530 "copy": true, 00:06:23.530 "nvme_iov_md": false 00:06:23.530 }, 00:06:23.530 "memory_domains": [ 00:06:23.530 { 00:06:23.530 "dma_device_id": "system", 00:06:23.530 "dma_device_type": 1 00:06:23.530 }, 00:06:23.530 { 00:06:23.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.530 "dma_device_type": 2 
00:06:23.530 } 00:06:23.530 ], 00:06:23.530 "driver_specific": { 00:06:23.530 "passthru": { 00:06:23.530 "name": "Passthru0", 00:06:23.530 "base_bdev_name": "Malloc0" 00:06:23.530 } 00:06:23.530 } 00:06:23.530 } 00:06:23.530 ]' 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.530 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:23.530 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:23.789 ************************************ 00:06:23.789 END TEST rpc_integrity 00:06:23.789 ************************************ 00:06:23.789 21:37:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:23.789 00:06:23.789 real 0m0.388s 00:06:23.789 user 0m0.220s 00:06:23.789 sys 0m0.054s 00:06:23.789 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.789 21:37:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.789 21:37:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:23.789 21:37:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.789 21:37:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.789 21:37:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.789 ************************************ 00:06:23.789 START TEST rpc_plugins 00:06:23.789 ************************************ 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:23.789 { 00:06:23.789 "name": "Malloc1", 00:06:23.789 "aliases": 
[ 00:06:23.789 "5fdcc32f-f1fa-4e99-9ec3-9e3b6b19c215" 00:06:23.789 ], 00:06:23.789 "product_name": "Malloc disk", 00:06:23.789 "block_size": 4096, 00:06:23.789 "num_blocks": 256, 00:06:23.789 "uuid": "5fdcc32f-f1fa-4e99-9ec3-9e3b6b19c215", 00:06:23.789 "assigned_rate_limits": { 00:06:23.789 "rw_ios_per_sec": 0, 00:06:23.789 "rw_mbytes_per_sec": 0, 00:06:23.789 "r_mbytes_per_sec": 0, 00:06:23.789 "w_mbytes_per_sec": 0 00:06:23.789 }, 00:06:23.789 "claimed": false, 00:06:23.789 "zoned": false, 00:06:23.789 "supported_io_types": { 00:06:23.789 "read": true, 00:06:23.789 "write": true, 00:06:23.789 "unmap": true, 00:06:23.789 "flush": true, 00:06:23.789 "reset": true, 00:06:23.789 "nvme_admin": false, 00:06:23.789 "nvme_io": false, 00:06:23.789 "nvme_io_md": false, 00:06:23.789 "write_zeroes": true, 00:06:23.789 "zcopy": true, 00:06:23.789 "get_zone_info": false, 00:06:23.789 "zone_management": false, 00:06:23.789 "zone_append": false, 00:06:23.789 "compare": false, 00:06:23.789 "compare_and_write": false, 00:06:23.789 "abort": true, 00:06:23.789 "seek_hole": false, 00:06:23.789 "seek_data": false, 00:06:23.789 "copy": true, 00:06:23.789 "nvme_iov_md": false 00:06:23.789 }, 00:06:23.789 "memory_domains": [ 00:06:23.789 { 00:06:23.789 "dma_device_id": "system", 00:06:23.789 "dma_device_type": 1 00:06:23.789 }, 00:06:23.789 { 00:06:23.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.789 "dma_device_type": 2 00:06:23.789 } 00:06:23.789 ], 00:06:23.789 "driver_specific": {} 00:06:23.789 } 00:06:23.789 ]' 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:23.789 ************************************ 00:06:23.789 END TEST rpc_plugins 00:06:23.789 ************************************ 00:06:23.789 21:37:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:23.789 00:06:23.789 real 0m0.161s 00:06:23.789 user 0m0.085s 00:06:23.789 sys 0m0.034s 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.789 21:37:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.048 21:37:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:24.048 21:37:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.048 21:37:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.048 21:37:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.048 ************************************ 00:06:24.048 START TEST rpc_trace_cmd_test 00:06:24.048 ************************************ 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:24.048 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59089", 00:06:24.048 "tpoint_group_mask": "0x8", 00:06:24.048 "iscsi_conn": { 00:06:24.048 "mask": "0x2", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "scsi": { 00:06:24.048 "mask": "0x4", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "bdev": { 00:06:24.048 "mask": "0x8", 00:06:24.048 "tpoint_mask": "0xffffffffffffffff" 00:06:24.048 }, 00:06:24.048 "nvmf_rdma": { 00:06:24.048 "mask": "0x10", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "nvmf_tcp": { 00:06:24.048 "mask": "0x20", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "ftl": { 00:06:24.048 "mask": "0x40", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "blobfs": { 00:06:24.048 "mask": "0x80", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "dsa": { 00:06:24.048 "mask": "0x200", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "thread": { 00:06:24.048 "mask": "0x400", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "nvme_pcie": { 00:06:24.048 "mask": "0x800", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "iaa": { 00:06:24.048 "mask": "0x1000", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "nvme_tcp": { 00:06:24.048 "mask": "0x2000", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "bdev_nvme": { 00:06:24.048 "mask": "0x4000", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "sock": { 00:06:24.048 "mask": "0x8000", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "blob": { 00:06:24.048 "mask": "0x10000", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "bdev_raid": { 00:06:24.048 "mask": "0x20000", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 }, 00:06:24.048 "scheduler": { 00:06:24.048 "mask": "0x40000", 00:06:24.048 "tpoint_mask": "0x0" 00:06:24.048 } 00:06:24.048 }' 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:24.048 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:24.307 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:24.307 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:24.307 ************************************ 00:06:24.307 END TEST rpc_trace_cmd_test 00:06:24.307 ************************************ 00:06:24.307 21:37:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:24.307 00:06:24.307 real 0m0.250s 
00:06:24.307 user 0m0.205s 00:06:24.307 sys 0m0.036s 00:06:24.307 21:37:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.307 21:37:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 21:37:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:24.307 21:37:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:24.307 21:37:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:24.307 21:37:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.307 21:37:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.307 21:37:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 ************************************ 00:06:24.307 START TEST rpc_daemon_integrity 00:06:24.307 ************************************ 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.307 21:37:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.307 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.307 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:24.307 { 00:06:24.307 "name": "Malloc2", 00:06:24.307 "aliases": [ 00:06:24.307 "35aa804f-ab5c-4ec4-b2da-03c4848fd5ac" 00:06:24.307 ], 00:06:24.307 "product_name": "Malloc disk", 00:06:24.307 "block_size": 512, 00:06:24.307 "num_blocks": 16384, 00:06:24.307 "uuid": "35aa804f-ab5c-4ec4-b2da-03c4848fd5ac", 00:06:24.307 "assigned_rate_limits": { 00:06:24.307 "rw_ios_per_sec": 0, 00:06:24.307 "rw_mbytes_per_sec": 0, 00:06:24.307 "r_mbytes_per_sec": 0, 00:06:24.307 "w_mbytes_per_sec": 0 00:06:24.307 }, 00:06:24.307 "claimed": false, 00:06:24.307 "zoned": false, 00:06:24.307 "supported_io_types": { 00:06:24.307 "read": true, 00:06:24.307 "write": true, 00:06:24.308 "unmap": true, 00:06:24.308 "flush": true, 00:06:24.308 "reset": true, 00:06:24.308 "nvme_admin": false, 00:06:24.308 "nvme_io": false, 00:06:24.308 "nvme_io_md": false, 00:06:24.308 "write_zeroes": true, 00:06:24.308 "zcopy": true, 00:06:24.308 "get_zone_info": false, 00:06:24.308 "zone_management": false, 00:06:24.308 "zone_append": false, 00:06:24.308 "compare": false, 00:06:24.308 
"compare_and_write": false, 00:06:24.308 "abort": true, 00:06:24.308 "seek_hole": false, 00:06:24.308 "seek_data": false, 00:06:24.308 "copy": true, 00:06:24.308 "nvme_iov_md": false 00:06:24.308 }, 00:06:24.308 "memory_domains": [ 00:06:24.308 { 00:06:24.308 "dma_device_id": "system", 00:06:24.308 "dma_device_type": 1 00:06:24.308 }, 00:06:24.308 { 00:06:24.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.308 "dma_device_type": 2 00:06:24.308 } 00:06:24.308 ], 00:06:24.308 "driver_specific": {} 00:06:24.308 } 00:06:24.308 ]' 00:06:24.308 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.567 [2024-12-10 21:37:32.072084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:24.567 [2024-12-10 21:37:32.072162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.567 [2024-12-10 21:37:32.072191] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:24.567 [2024-12-10 21:37:32.072209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.567 [2024-12-10 21:37:32.075105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.567 [2024-12-10 21:37:32.075282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:24.567 Passthru0 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:24.567 { 00:06:24.567 "name": "Malloc2", 00:06:24.567 "aliases": [ 00:06:24.567 "35aa804f-ab5c-4ec4-b2da-03c4848fd5ac" 00:06:24.567 ], 00:06:24.567 "product_name": "Malloc disk", 00:06:24.567 "block_size": 512, 00:06:24.567 "num_blocks": 16384, 00:06:24.567 "uuid": "35aa804f-ab5c-4ec4-b2da-03c4848fd5ac", 00:06:24.567 "assigned_rate_limits": { 00:06:24.567 "rw_ios_per_sec": 0, 00:06:24.567 "rw_mbytes_per_sec": 0, 00:06:24.567 "r_mbytes_per_sec": 0, 00:06:24.567 "w_mbytes_per_sec": 0 00:06:24.567 }, 00:06:24.567 "claimed": true, 00:06:24.567 "claim_type": "exclusive_write", 00:06:24.567 "zoned": false, 00:06:24.567 "supported_io_types": { 00:06:24.567 "read": true, 00:06:24.567 "write": true, 00:06:24.567 "unmap": true, 00:06:24.567 "flush": true, 00:06:24.567 "reset": true, 00:06:24.567 "nvme_admin": false, 00:06:24.567 "nvme_io": false, 00:06:24.567 "nvme_io_md": false, 00:06:24.567 "write_zeroes": true, 00:06:24.567 "zcopy": true, 00:06:24.567 "get_zone_info": false, 00:06:24.567 "zone_management": false, 00:06:24.567 "zone_append": false, 00:06:24.567 "compare": false, 00:06:24.567 "compare_and_write": false, 00:06:24.567 "abort": true, 00:06:24.567 "seek_hole": false, 00:06:24.567 "seek_data": false, 
00:06:24.567 "copy": true, 00:06:24.567 "nvme_iov_md": false 00:06:24.567 }, 00:06:24.567 "memory_domains": [ 00:06:24.567 { 00:06:24.567 "dma_device_id": "system", 00:06:24.567 "dma_device_type": 1 00:06:24.567 }, 00:06:24.567 { 00:06:24.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.567 "dma_device_type": 2 00:06:24.567 } 00:06:24.567 ], 00:06:24.567 "driver_specific": {} 00:06:24.567 }, 00:06:24.567 { 00:06:24.567 "name": "Passthru0", 00:06:24.567 "aliases": [ 00:06:24.567 "91da8ad6-cfa4-5de8-bc4f-91ed2afdf2bd" 00:06:24.567 ], 00:06:24.567 "product_name": "passthru", 00:06:24.567 "block_size": 512, 00:06:24.567 "num_blocks": 16384, 00:06:24.567 "uuid": "91da8ad6-cfa4-5de8-bc4f-91ed2afdf2bd", 00:06:24.567 "assigned_rate_limits": { 00:06:24.567 "rw_ios_per_sec": 0, 00:06:24.567 "rw_mbytes_per_sec": 0, 00:06:24.567 "r_mbytes_per_sec": 0, 00:06:24.567 "w_mbytes_per_sec": 0 00:06:24.567 }, 00:06:24.567 "claimed": false, 00:06:24.567 "zoned": false, 00:06:24.567 "supported_io_types": { 00:06:24.567 "read": true, 00:06:24.567 "write": true, 00:06:24.567 "unmap": true, 00:06:24.567 "flush": true, 00:06:24.567 "reset": true, 00:06:24.567 "nvme_admin": false, 00:06:24.567 "nvme_io": false, 00:06:24.567 "nvme_io_md": false, 00:06:24.567 "write_zeroes": true, 00:06:24.567 "zcopy": true, 00:06:24.567 "get_zone_info": false, 00:06:24.567 "zone_management": false, 00:06:24.567 "zone_append": false, 00:06:24.567 "compare": false, 00:06:24.567 "compare_and_write": false, 00:06:24.567 "abort": true, 00:06:24.567 "seek_hole": false, 00:06:24.567 "seek_data": false, 00:06:24.567 "copy": true, 00:06:24.567 "nvme_iov_md": false 00:06:24.567 }, 00:06:24.567 "memory_domains": [ 00:06:24.567 { 00:06:24.567 "dma_device_id": "system", 00:06:24.567 "dma_device_type": 1 00:06:24.567 }, 00:06:24.567 { 00:06:24.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.567 "dma_device_type": 2 00:06:24.567 } 00:06:24.567 ], 00:06:24.567 "driver_specific": { 00:06:24.567 "passthru": { 00:06:24.567 "name": "Passthru0", 00:06:24.567 "base_bdev_name": "Malloc2" 00:06:24.567 } 00:06:24.567 } 00:06:24.567 } 00:06:24.567 ]' 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.567 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
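The create/claim/delete cycle rpc_daemon_integrity walks through above can be reproduced by hand with the same RPCs; a hedged sketch (the malloc bdev's name is auto-assigned, Malloc2 in this run):

    # 8 MB malloc bdev with 512-byte blocks, wrapped by a passthru that claims it.
    ./scripts/rpc.py bdev_malloc_create 8 512                # prints the new name
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length              # expect: 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    ./scripts/rpc.py bdev_get_bdevs | jq length              # expect: 0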
00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:24.568 ************************************ 00:06:24.568 END TEST rpc_daemon_integrity 00:06:24.568 ************************************ 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:24.568 00:06:24.568 real 0m0.354s 00:06:24.568 user 0m0.175s 00:06:24.568 sys 0m0.055s 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.568 21:37:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.826 21:37:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:24.826 21:37:32 rpc -- rpc/rpc.sh@84 -- # killprocess 59089 00:06:24.826 21:37:32 rpc -- common/autotest_common.sh@954 -- # '[' -z 59089 ']' 00:06:24.826 21:37:32 rpc -- common/autotest_common.sh@958 -- # kill -0 59089 00:06:24.826 21:37:32 rpc -- common/autotest_common.sh@959 -- # uname 00:06:24.826 21:37:32 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.826 21:37:32 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59089 00:06:24.826 killing process with pid 59089 00:06:24.827 21:37:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.827 21:37:32 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.827 21:37:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59089' 00:06:24.827 21:37:32 rpc -- common/autotest_common.sh@973 -- # kill 59089 00:06:24.827 21:37:32 rpc -- common/autotest_common.sh@978 -- # wait 59089 00:06:28.111 00:06:28.111 real 0m5.961s 00:06:28.111 user 0m6.315s 00:06:28.111 sys 0m1.171s 00:06:28.111 21:37:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.111 ************************************ 00:06:28.111 END TEST rpc 00:06:28.111 ************************************ 00:06:28.111 21:37:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 21:37:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:28.111 21:37:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.111 21:37:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.111 21:37:35 -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 ************************************ 00:06:28.111 START TEST skip_rpc 00:06:28.111 ************************************ 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:28.111 * Looking for test storage... 
00:06:28.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.111 21:37:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.111 --rc genhtml_branch_coverage=1 00:06:28.111 --rc genhtml_function_coverage=1 00:06:28.111 --rc genhtml_legend=1 00:06:28.111 --rc geninfo_all_blocks=1 00:06:28.111 --rc geninfo_unexecuted_blocks=1 00:06:28.111 00:06:28.111 ' 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.111 --rc genhtml_branch_coverage=1 00:06:28.111 --rc genhtml_function_coverage=1 00:06:28.111 --rc genhtml_legend=1 00:06:28.111 --rc geninfo_all_blocks=1 00:06:28.111 --rc geninfo_unexecuted_blocks=1 00:06:28.111 00:06:28.111 ' 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:28.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.111 --rc genhtml_branch_coverage=1 00:06:28.111 --rc genhtml_function_coverage=1 00:06:28.111 --rc genhtml_legend=1 00:06:28.111 --rc geninfo_all_blocks=1 00:06:28.111 --rc geninfo_unexecuted_blocks=1 00:06:28.111 00:06:28.111 ' 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.111 --rc genhtml_branch_coverage=1 00:06:28.111 --rc genhtml_function_coverage=1 00:06:28.111 --rc genhtml_legend=1 00:06:28.111 --rc geninfo_all_blocks=1 00:06:28.111 --rc geninfo_unexecuted_blocks=1 00:06:28.111 00:06:28.111 ' 00:06:28.111 21:37:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:28.111 21:37:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:28.111 21:37:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.111 21:37:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.111 ************************************ 00:06:28.111 START TEST skip_rpc 00:06:28.111 ************************************ 00:06:28.111 21:37:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:28.111 21:37:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59329 00:06:28.111 21:37:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:28.111 21:37:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.111 21:37:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:28.111 [2024-12-10 21:37:35.567966] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
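The lcov version gate repeated before each suite boils down to a field-wise compare after splitting on '.', '-' and ':'; a stripped-down sketch of the same idea (numeric fields assumed, unlike the more defensive cmp_versions in scripts/common.sh):

    # Return success when dotted version $1 sorts before $2.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: use the legacy --rc option names"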
00:06:28.111 [2024-12-10 21:37:35.568445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59329 ] 00:06:28.111 [2024-12-10 21:37:35.755242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.370 [2024-12-10 21:37:35.868700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59329 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59329 ']' 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59329 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59329 00:06:33.721 killing process with pid 59329 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59329' 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59329 00:06:33.721 21:37:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59329 00:06:35.625 00:06:35.625 real 0m7.766s 00:06:35.625 user 0m7.105s 00:06:35.625 sys 0m0.581s 00:06:35.625 21:37:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.625 21:37:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.625 ************************************ 00:06:35.625 END TEST skip_rpc 00:06:35.625 
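test_skip_rpc, which just finished above, asserts that a target started with --no-rpc-server comes up but refuses RPCs; a condensed sketch (startup wait simplified to the same sleep 5 the test uses):

    # No RPC server: the target runs, but any RPC call must fail.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: spdk_get_version succeeded without an RPC server" >&2
        exit 1
    fi
    kill "$spdk_pid"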
************************************ 00:06:35.625 21:37:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:35.625 21:37:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.625 21:37:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.625 21:37:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.625 ************************************ 00:06:35.625 START TEST skip_rpc_with_json 00:06:35.625 ************************************ 00:06:35.625 21:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:35.625 21:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59433 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59433 00:06:35.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59433 ']' 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.626 21:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.884 [2024-12-10 21:37:43.366995] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
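The waitforlisten helper used above amounts to polling the target's RPC socket until it answers; a rough equivalent (rpc_get_methods is a stand-in probe here, and the real helper in autotest_common.sh is more defensive about retries and timeouts):

    # Block until the freshly started target answers on /var/tmp/spdk.sock.
    ./build/bin/spdk_tgt -m 0x1 &
    spdk_pid=$!
    while ! ./scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$spdk_pid" || exit 1    # give up if the target died
        sleep 0.5
    done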
00:06:35.884 [2024-12-10 21:37:43.367177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59433 ] 00:06:35.884 [2024-12-10 21:37:43.537378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.142 [2024-12-10 21:37:43.697688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:37.079 [2024-12-10 21:37:44.734398] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:37.079 request: 00:06:37.079 { 00:06:37.079 "trtype": "tcp", 00:06:37.079 "method": "nvmf_get_transports", 00:06:37.079 "req_id": 1 00:06:37.079 } 00:06:37.079 Got JSON-RPC error response 00:06:37.079 response: 00:06:37.079 { 00:06:37.079 "code": -19, 00:06:37.079 "message": "No such device" 00:06:37.079 } 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:37.079 [2024-12-10 21:37:44.750489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.079 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.337 21:37:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:37.337 { 00:06:37.337 "subsystems": [ 00:06:37.337 { 00:06:37.337 "subsystem": "fsdev", 00:06:37.337 "config": [ 00:06:37.337 { 00:06:37.337 "method": "fsdev_set_opts", 00:06:37.337 "params": { 00:06:37.337 "fsdev_io_pool_size": 65535, 00:06:37.337 "fsdev_io_cache_size": 256 00:06:37.337 } 00:06:37.337 } 00:06:37.337 ] 00:06:37.337 }, 00:06:37.337 { 00:06:37.337 "subsystem": "keyring", 00:06:37.337 "config": [] 00:06:37.337 }, 00:06:37.337 { 00:06:37.337 "subsystem": "iobuf", 00:06:37.337 "config": [ 00:06:37.337 { 00:06:37.337 "method": "iobuf_set_options", 00:06:37.337 "params": { 00:06:37.337 "small_pool_count": 8192, 00:06:37.337 "large_pool_count": 1024, 00:06:37.337 "small_bufsize": 8192, 00:06:37.337 "large_bufsize": 135168, 00:06:37.337 "enable_numa": false 00:06:37.337 } 00:06:37.337 } 00:06:37.337 ] 00:06:37.337 }, 00:06:37.337 { 00:06:37.337 "subsystem": "sock", 00:06:37.337 "config": [ 00:06:37.337 { 
00:06:37.337 "method": "sock_set_default_impl", 00:06:37.337 "params": { 00:06:37.337 "impl_name": "posix" 00:06:37.337 } 00:06:37.337 }, 00:06:37.337 { 00:06:37.338 "method": "sock_impl_set_options", 00:06:37.338 "params": { 00:06:37.338 "impl_name": "ssl", 00:06:37.338 "recv_buf_size": 4096, 00:06:37.338 "send_buf_size": 4096, 00:06:37.338 "enable_recv_pipe": true, 00:06:37.338 "enable_quickack": false, 00:06:37.338 "enable_placement_id": 0, 00:06:37.338 "enable_zerocopy_send_server": true, 00:06:37.338 "enable_zerocopy_send_client": false, 00:06:37.338 "zerocopy_threshold": 0, 00:06:37.338 "tls_version": 0, 00:06:37.338 "enable_ktls": false 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "sock_impl_set_options", 00:06:37.338 "params": { 00:06:37.338 "impl_name": "posix", 00:06:37.338 "recv_buf_size": 2097152, 00:06:37.338 "send_buf_size": 2097152, 00:06:37.338 "enable_recv_pipe": true, 00:06:37.338 "enable_quickack": false, 00:06:37.338 "enable_placement_id": 0, 00:06:37.338 "enable_zerocopy_send_server": true, 00:06:37.338 "enable_zerocopy_send_client": false, 00:06:37.338 "zerocopy_threshold": 0, 00:06:37.338 "tls_version": 0, 00:06:37.338 "enable_ktls": false 00:06:37.338 } 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "vmd", 00:06:37.338 "config": [] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "accel", 00:06:37.338 "config": [ 00:06:37.338 { 00:06:37.338 "method": "accel_set_options", 00:06:37.338 "params": { 00:06:37.338 "small_cache_size": 128, 00:06:37.338 "large_cache_size": 16, 00:06:37.338 "task_count": 2048, 00:06:37.338 "sequence_count": 2048, 00:06:37.338 "buf_count": 2048 00:06:37.338 } 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "bdev", 00:06:37.338 "config": [ 00:06:37.338 { 00:06:37.338 "method": "bdev_set_options", 00:06:37.338 "params": { 00:06:37.338 "bdev_io_pool_size": 65535, 00:06:37.338 "bdev_io_cache_size": 256, 00:06:37.338 "bdev_auto_examine": true, 00:06:37.338 "iobuf_small_cache_size": 128, 00:06:37.338 "iobuf_large_cache_size": 16 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "bdev_raid_set_options", 00:06:37.338 "params": { 00:06:37.338 "process_window_size_kb": 1024, 00:06:37.338 "process_max_bandwidth_mb_sec": 0 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "bdev_iscsi_set_options", 00:06:37.338 "params": { 00:06:37.338 "timeout_sec": 30 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "bdev_nvme_set_options", 00:06:37.338 "params": { 00:06:37.338 "action_on_timeout": "none", 00:06:37.338 "timeout_us": 0, 00:06:37.338 "timeout_admin_us": 0, 00:06:37.338 "keep_alive_timeout_ms": 10000, 00:06:37.338 "arbitration_burst": 0, 00:06:37.338 "low_priority_weight": 0, 00:06:37.338 "medium_priority_weight": 0, 00:06:37.338 "high_priority_weight": 0, 00:06:37.338 "nvme_adminq_poll_period_us": 10000, 00:06:37.338 "nvme_ioq_poll_period_us": 0, 00:06:37.338 "io_queue_requests": 0, 00:06:37.338 "delay_cmd_submit": true, 00:06:37.338 "transport_retry_count": 4, 00:06:37.338 "bdev_retry_count": 3, 00:06:37.338 "transport_ack_timeout": 0, 00:06:37.338 "ctrlr_loss_timeout_sec": 0, 00:06:37.338 "reconnect_delay_sec": 0, 00:06:37.338 "fast_io_fail_timeout_sec": 0, 00:06:37.338 "disable_auto_failback": false, 00:06:37.338 "generate_uuids": false, 00:06:37.338 "transport_tos": 0, 00:06:37.338 "nvme_error_stat": false, 00:06:37.338 "rdma_srq_size": 0, 00:06:37.338 "io_path_stat": false, 
00:06:37.338 "allow_accel_sequence": false, 00:06:37.338 "rdma_max_cq_size": 0, 00:06:37.338 "rdma_cm_event_timeout_ms": 0, 00:06:37.338 "dhchap_digests": [ 00:06:37.338 "sha256", 00:06:37.338 "sha384", 00:06:37.338 "sha512" 00:06:37.338 ], 00:06:37.338 "dhchap_dhgroups": [ 00:06:37.338 "null", 00:06:37.338 "ffdhe2048", 00:06:37.338 "ffdhe3072", 00:06:37.338 "ffdhe4096", 00:06:37.338 "ffdhe6144", 00:06:37.338 "ffdhe8192" 00:06:37.338 ], 00:06:37.338 "rdma_umr_per_io": false 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "bdev_nvme_set_hotplug", 00:06:37.338 "params": { 00:06:37.338 "period_us": 100000, 00:06:37.338 "enable": false 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "bdev_wait_for_examine" 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "scsi", 00:06:37.338 "config": null 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "scheduler", 00:06:37.338 "config": [ 00:06:37.338 { 00:06:37.338 "method": "framework_set_scheduler", 00:06:37.338 "params": { 00:06:37.338 "name": "static" 00:06:37.338 } 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "vhost_scsi", 00:06:37.338 "config": [] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "vhost_blk", 00:06:37.338 "config": [] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "ublk", 00:06:37.338 "config": [] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "nbd", 00:06:37.338 "config": [] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "nvmf", 00:06:37.338 "config": [ 00:06:37.338 { 00:06:37.338 "method": "nvmf_set_config", 00:06:37.338 "params": { 00:06:37.338 "discovery_filter": "match_any", 00:06:37.338 "admin_cmd_passthru": { 00:06:37.338 "identify_ctrlr": false 00:06:37.338 }, 00:06:37.338 "dhchap_digests": [ 00:06:37.338 "sha256", 00:06:37.338 "sha384", 00:06:37.338 "sha512" 00:06:37.338 ], 00:06:37.338 "dhchap_dhgroups": [ 00:06:37.338 "null", 00:06:37.338 "ffdhe2048", 00:06:37.338 "ffdhe3072", 00:06:37.338 "ffdhe4096", 00:06:37.338 "ffdhe6144", 00:06:37.338 "ffdhe8192" 00:06:37.338 ] 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "nvmf_set_max_subsystems", 00:06:37.338 "params": { 00:06:37.338 "max_subsystems": 1024 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "nvmf_set_crdt", 00:06:37.338 "params": { 00:06:37.338 "crdt1": 0, 00:06:37.338 "crdt2": 0, 00:06:37.338 "crdt3": 0 00:06:37.338 } 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "method": "nvmf_create_transport", 00:06:37.338 "params": { 00:06:37.338 "trtype": "TCP", 00:06:37.338 "max_queue_depth": 128, 00:06:37.338 "max_io_qpairs_per_ctrlr": 127, 00:06:37.338 "in_capsule_data_size": 4096, 00:06:37.338 "max_io_size": 131072, 00:06:37.338 "io_unit_size": 131072, 00:06:37.338 "max_aq_depth": 128, 00:06:37.338 "num_shared_buffers": 511, 00:06:37.338 "buf_cache_size": 4294967295, 00:06:37.338 "dif_insert_or_strip": false, 00:06:37.338 "zcopy": false, 00:06:37.338 "c2h_success": true, 00:06:37.338 "sock_priority": 0, 00:06:37.338 "abort_timeout_sec": 1, 00:06:37.338 "ack_timeout": 0, 00:06:37.338 "data_wr_pool_size": 0 00:06:37.338 } 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 }, 00:06:37.338 { 00:06:37.338 "subsystem": "iscsi", 00:06:37.338 "config": [ 00:06:37.338 { 00:06:37.338 "method": "iscsi_set_options", 00:06:37.338 "params": { 00:06:37.338 "node_base": "iqn.2016-06.io.spdk", 00:06:37.338 "max_sessions": 128, 00:06:37.338 "max_connections_per_session": 2, 00:06:37.338 
"max_queue_depth": 64, 00:06:37.338 "default_time2wait": 2, 00:06:37.338 "default_time2retain": 20, 00:06:37.338 "first_burst_length": 8192, 00:06:37.338 "immediate_data": true, 00:06:37.338 "allow_duplicated_isid": false, 00:06:37.338 "error_recovery_level": 0, 00:06:37.338 "nop_timeout": 60, 00:06:37.338 "nop_in_interval": 30, 00:06:37.338 "disable_chap": false, 00:06:37.338 "require_chap": false, 00:06:37.338 "mutual_chap": false, 00:06:37.338 "chap_group": 0, 00:06:37.338 "max_large_datain_per_connection": 64, 00:06:37.338 "max_r2t_per_connection": 4, 00:06:37.338 "pdu_pool_size": 36864, 00:06:37.338 "immediate_data_pool_size": 16384, 00:06:37.338 "data_out_pool_size": 2048 00:06:37.338 } 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 } 00:06:37.338 ] 00:06:37.338 } 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59433 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59433 ']' 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59433 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59433 00:06:37.338 killing process with pid 59433 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59433' 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59433 00:06:37.338 21:37:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59433 00:06:40.624 21:37:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59489 00:06:40.625 21:37:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:40.625 21:37:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59489 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59489 ']' 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59489 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59489 00:06:45.976 killing process with pid 59489 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59489' 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59489 00:06:45.976 21:37:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59489 00:06:47.881 21:37:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:47.881 21:37:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:47.881 00:06:47.881 real 0m12.101s 00:06:47.881 user 0m11.337s 00:06:47.881 sys 0m1.185s 00:06:47.881 21:37:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.881 ************************************ 00:06:47.881 END TEST skip_rpc_with_json 00:06:47.881 21:37:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.881 ************************************ 00:06:47.881 21:37:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:47.881 21:37:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.881 21:37:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.881 21:37:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.881 ************************************ 00:06:47.881 START TEST skip_rpc_with_delay 00:06:47.882 ************************************ 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:47.882 [2024-12-10 21:37:55.533634] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
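The round-trip skip_rpc_with_json validated above: build state over RPC, snapshot it with save_config, then boot a second target from the file alone; a condensed sketch (output capture simplified relative to the test's LOG_PATH handling):

    # Create the TCP transport live, save the config, restart from JSON only.
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json 2> log.txt &
    sleep 5
    grep -q 'TCP Transport Init' log.txt    # transport restored without any RPC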
00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.882 00:06:47.882 real 0m0.193s 00:06:47.882 user 0m0.099s 00:06:47.882 sys 0m0.092s 00:06:47.882 ************************************ 00:06:47.882 END TEST skip_rpc_with_delay 00:06:47.882 ************************************ 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.882 21:37:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:48.141 21:37:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:48.141 21:37:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:48.141 21:37:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:48.141 21:37:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.141 21:37:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.141 21:37:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.141 ************************************ 00:06:48.141 START TEST exit_on_failed_rpc_init 00:06:48.141 ************************************ 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59628 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59628 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59628 ']' 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.141 21:37:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:48.141 [2024-12-10 21:37:55.805528] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:06:48.141 [2024-12-10 21:37:55.805680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59628 ] 00:06:48.401 [2024-12-10 21:37:55.981535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.401 [2024-12-10 21:37:56.099263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:49.780 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:49.780 [2024-12-10 21:37:57.238980] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:49.780 [2024-12-10 21:37:57.239151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:06:49.780 [2024-12-10 21:37:57.420263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.039 [2024-12-10 21:37:57.566585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.039 [2024-12-10 21:37:57.566706] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:50.039 [2024-12-10 21:37:57.566724] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:50.039 [2024-12-10 21:37:57.566745] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59628 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59628 ']' 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59628 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59628 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59628' 00:06:50.297 killing process with pid 59628 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59628 00:06:50.297 21:37:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59628 00:06:52.831 00:06:52.831 real 0m4.778s 00:06:52.831 user 0m5.005s 00:06:52.831 sys 0m0.756s 00:06:52.831 21:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.831 ************************************ 00:06:52.831 END TEST exit_on_failed_rpc_init 00:06:52.831 ************************************ 00:06:52.831 21:38:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:52.831 21:38:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:52.831 ************************************ 00:06:52.831 END TEST skip_rpc 00:06:52.831 ************************************ 00:06:52.831 00:06:52.831 real 0m25.363s 00:06:52.831 user 0m23.787s 00:06:52.831 sys 0m2.901s 00:06:52.831 21:38:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.831 21:38:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.091 21:38:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:53.091 21:38:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.091 21:38:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.091 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.091 
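The socket collision above is exactly what exit_on_failed_rpc_init provokes: a second target on the default RPC socket must fail initialization and exit non-zero; a bare-bones reproduction:

    # First target owns /var/tmp/spdk.sock; the second must fail to start.
    ./build/bin/spdk_tgt -m 0x1 &
    sleep 5
    ./build/bin/spdk_tgt -m 0x2                   # rpc_listen: socket in use
    echo "second target exit status: $?"          # expect: non-zero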
************************************ 00:06:53.091 START TEST rpc_client 00:06:53.091 ************************************ 00:06:53.091 21:38:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:53.091 * Looking for test storage... 00:06:53.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:53.091 21:38:00 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.091 21:38:00 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.091 21:38:00 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.091 21:38:00 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.091 21:38:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:53.350 21:38:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.350 21:38:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.350 21:38:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.350 21:38:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:53.350 21:38:00 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.350 21:38:00 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.350 --rc genhtml_branch_coverage=1 00:06:53.350 --rc genhtml_function_coverage=1 00:06:53.350 --rc genhtml_legend=1 00:06:53.350 --rc geninfo_all_blocks=1 00:06:53.350 --rc geninfo_unexecuted_blocks=1 00:06:53.350 00:06:53.350 ' 00:06:53.350 21:38:00 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.350 --rc genhtml_branch_coverage=1 00:06:53.350 --rc genhtml_function_coverage=1 00:06:53.350 --rc genhtml_legend=1 00:06:53.350 --rc geninfo_all_blocks=1 00:06:53.350 --rc geninfo_unexecuted_blocks=1 00:06:53.350 00:06:53.350 ' 00:06:53.350 21:38:00 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.350 --rc genhtml_branch_coverage=1 00:06:53.350 --rc genhtml_function_coverage=1 00:06:53.350 --rc genhtml_legend=1 00:06:53.350 --rc geninfo_all_blocks=1 00:06:53.350 --rc geninfo_unexecuted_blocks=1 00:06:53.350 00:06:53.350 ' 00:06:53.350 21:38:00 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.350 --rc genhtml_branch_coverage=1 00:06:53.350 --rc genhtml_function_coverage=1 00:06:53.350 --rc genhtml_legend=1 00:06:53.350 --rc geninfo_all_blocks=1 00:06:53.350 --rc geninfo_unexecuted_blocks=1 00:06:53.350 00:06:53.350 ' 00:06:53.350 21:38:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:53.350 OK 00:06:53.350 21:38:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:53.350 00:06:53.350 real 0m0.290s 00:06:53.350 user 0m0.156s 00:06:53.350 sys 0m0.151s 00:06:53.350 21:38:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.350 21:38:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:53.350 ************************************ 00:06:53.350 END TEST rpc_client 00:06:53.350 ************************************ 00:06:53.350 21:38:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:53.350 21:38:00 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.350 21:38:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.350 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.350 ************************************ 00:06:53.350 START TEST json_config 00:06:53.350 ************************************ 00:06:53.350 21:38:00 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:53.350 21:38:01 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.350 21:38:01 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.350 21:38:01 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.609 21:38:01 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.609 21:38:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.609 21:38:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.609 21:38:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.609 21:38:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.609 21:38:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.609 21:38:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.609 21:38:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.609 21:38:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.609 21:38:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.609 21:38:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.609 21:38:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.609 21:38:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:53.609 21:38:01 json_config -- scripts/common.sh@345 -- # : 1 00:06:53.609 21:38:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.609 21:38:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.609 21:38:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:53.609 21:38:01 json_config -- scripts/common.sh@353 -- # local d=1 00:06:53.609 21:38:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.609 21:38:01 json_config -- scripts/common.sh@355 -- # echo 1 00:06:53.609 21:38:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.609 21:38:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:53.609 21:38:01 json_config -- scripts/common.sh@353 -- # local d=2 00:06:53.609 21:38:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.609 21:38:01 json_config -- scripts/common.sh@355 -- # echo 2 00:06:53.609 21:38:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.609 21:38:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.609 21:38:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.609 21:38:01 json_config -- scripts/common.sh@368 -- # return 0 00:06:53.609 21:38:01 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.609 21:38:01 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.609 --rc genhtml_branch_coverage=1 00:06:53.609 --rc genhtml_function_coverage=1 00:06:53.609 --rc genhtml_legend=1 00:06:53.609 --rc geninfo_all_blocks=1 00:06:53.609 --rc geninfo_unexecuted_blocks=1 00:06:53.609 00:06:53.609 ' 00:06:53.609 21:38:01 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.609 --rc genhtml_branch_coverage=1 00:06:53.610 --rc genhtml_function_coverage=1 00:06:53.610 --rc genhtml_legend=1 00:06:53.610 --rc geninfo_all_blocks=1 00:06:53.610 --rc geninfo_unexecuted_blocks=1 00:06:53.610 00:06:53.610 ' 00:06:53.610 21:38:01 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.610 --rc genhtml_branch_coverage=1 00:06:53.610 --rc genhtml_function_coverage=1 00:06:53.610 --rc genhtml_legend=1 00:06:53.610 --rc geninfo_all_blocks=1 00:06:53.610 --rc geninfo_unexecuted_blocks=1 00:06:53.610 00:06:53.610 ' 00:06:53.610 21:38:01 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.610 --rc genhtml_branch_coverage=1 00:06:53.610 --rc genhtml_function_coverage=1 00:06:53.610 --rc genhtml_legend=1 00:06:53.610 --rc geninfo_all_blocks=1 00:06:53.610 --rc geninfo_unexecuted_blocks=1 00:06:53.610 00:06:53.610 ' 00:06:53.610 21:38:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.610 21:38:01 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8114feff-7a9b-4189-b04e-c77dfee632c5 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8114feff-7a9b-4189-b04e-c77dfee632c5 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.610 21:38:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.610 21:38:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.610 21:38:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.610 21:38:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.610 21:38:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.610 21:38:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.610 21:38:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.610 21:38:01 json_config -- paths/export.sh@5 -- # export PATH 00:06:53.610 21:38:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@51 -- # : 0 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.610 21:38:01 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.610 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.610 21:38:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.610 21:38:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:53.610 21:38:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:53.610 21:38:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:53.610 WARNING: No tests are enabled so not running JSON configuration tests 00:06:53.610 21:38:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:53.610 21:38:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:53.610 21:38:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:53.610 21:38:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:53.610 ************************************ 00:06:53.610 END TEST json_config 00:06:53.610 ************************************ 00:06:53.610 00:06:53.610 real 0m0.261s 00:06:53.610 user 0m0.163s 00:06:53.610 sys 0m0.097s 00:06:53.610 21:38:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.610 21:38:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.610 21:38:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:53.610 21:38:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.610 21:38:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.610 21:38:01 -- common/autotest_common.sh@10 -- # set +x 00:06:53.610 ************************************ 00:06:53.610 START TEST json_config_extra_key 00:06:53.610 ************************************ 00:06:53.610 21:38:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.869 21:38:01 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.869 --rc genhtml_branch_coverage=1 00:06:53.869 --rc genhtml_function_coverage=1 00:06:53.869 --rc genhtml_legend=1 00:06:53.869 --rc geninfo_all_blocks=1 00:06:53.869 --rc geninfo_unexecuted_blocks=1 00:06:53.869 00:06:53.869 ' 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.869 --rc genhtml_branch_coverage=1 00:06:53.869 --rc genhtml_function_coverage=1 00:06:53.869 --rc genhtml_legend=1 00:06:53.869 --rc geninfo_all_blocks=1 00:06:53.869 --rc geninfo_unexecuted_blocks=1 00:06:53.869 00:06:53.869 ' 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.869 --rc genhtml_branch_coverage=1 00:06:53.869 --rc genhtml_function_coverage=1 00:06:53.869 --rc genhtml_legend=1 00:06:53.869 --rc geninfo_all_blocks=1 00:06:53.869 --rc geninfo_unexecuted_blocks=1 00:06:53.869 00:06:53.869 ' 00:06:53.869 21:38:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.869 --rc genhtml_branch_coverage=1 00:06:53.869 --rc 
genhtml_function_coverage=1 00:06:53.869 --rc genhtml_legend=1 00:06:53.869 --rc geninfo_all_blocks=1 00:06:53.869 --rc geninfo_unexecuted_blocks=1 00:06:53.869 00:06:53.869 ' 00:06:53.869 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8114feff-7a9b-4189-b04e-c77dfee632c5 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8114feff-7a9b-4189-b04e-c77dfee632c5 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.869 21:38:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.869 21:38:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.869 21:38:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.870 21:38:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.870 21:38:01 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.870 21:38:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:53.870 21:38:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.870 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.870 21:38:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.870 INFO: launching applications... 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
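One detail worth flagging in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and bash's [ rejects the empty left operand, which is where the recurring "[: : integer expression expected" message comes from; the branch is simply skipped and the run continues. A defensive sketch of that test follows (the actual variable at line 33 is not visible in this log, so SPDK_TEST_SOMEFLAG is a placeholder):

    # Guarding an integer test against unset/empty variables. The real
    # variable name at nvmf/common.sh:33 is not shown in this log;
    # SPDK_TEST_SOMEFLAG is a stand-in.
    if [ "${SPDK_TEST_SOMEFLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
    # ${var:-0} substitutes 0 when the variable is unset or empty, so
    # [ always compares two integers and never prints the error above.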
00:06:53.870 21:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59866 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.870 Waiting for target to run... 00:06:53.870 21:38:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59866 /var/tmp/spdk_tgt.sock 00:06:53.870 21:38:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59866 ']' 00:06:53.870 21:38:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.870 21:38:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.870 21:38:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:53.870 21:38:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.870 21:38:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:54.129 [2024-12-10 21:38:01.645493] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:54.129 [2024-12-10 21:38:01.645905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59866 ] 00:06:54.696 [2024-12-10 21:38:02.202699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.696 [2024-12-10 21:38:02.341951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.631 21:38:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.631 21:38:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:55.631 00:06:55.631 INFO: shutting down applications... 00:06:55.631 21:38:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
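The block that follows is the graceful-shutdown loop from json_config/common.sh: one SIGINT to the target, then a probe with kill -0 (signal 0 checks for existence without delivering anything) every 0.5 s, for at most 30 iterations before the helper would give up. A condensed sketch of the same shape, assuming the app_pid bookkeeping declared earlier in the trace:

    # Condensed sketch of the shutdown loop traced below.
    pid=${app_pid[$app]}
    kill -SIGINT "$pid"                        # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'   # PID no longer exists
            break
        fi
        sleep 0.5                              # the sleeps visible in the trace
    done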
00:06:55.631 21:38:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59866 ]] 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59866 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59866 00:06:55.631 21:38:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:56.199 21:38:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:56.199 21:38:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:56.199 21:38:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59866 00:06:56.199 21:38:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:56.765 21:38:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:56.765 21:38:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:56.765 21:38:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59866 00:06:56.765 21:38:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:57.022 21:38:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:57.022 21:38:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:57.022 21:38:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59866 00:06:57.022 21:38:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:57.590 21:38:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:57.590 21:38:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:57.590 21:38:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59866 00:06:57.590 21:38:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:58.157 21:38:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:58.157 21:38:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.157 21:38:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59866 00:06:58.157 21:38:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:58.725 21:38:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:58.725 21:38:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.725 21:38:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59866 00:06:58.725 21:38:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:59.293 21:38:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:59.293 21:38:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:59.293 21:38:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59866 00:06:59.293 21:38:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:59.293 21:38:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:59.293 21:38:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:59.293 21:38:06 
json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:59.293 SPDK target shutdown done 00:06:59.293 Success 00:06:59.293 21:38:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:59.293 00:06:59.293 real 0m5.433s 00:06:59.293 user 0m4.745s 00:06:59.293 sys 0m0.789s 00:06:59.293 21:38:06 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.293 ************************************ 00:06:59.293 21:38:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:59.293 END TEST json_config_extra_key 00:06:59.293 ************************************ 00:06:59.293 21:38:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:59.293 21:38:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.293 21:38:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.293 21:38:06 -- common/autotest_common.sh@10 -- # set +x 00:06:59.293 ************************************ 00:06:59.293 START TEST alias_rpc 00:06:59.293 ************************************ 00:06:59.293 21:38:06 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:59.293 * Looking for test storage... 00:06:59.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:59.293 21:38:06 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:59.293 21:38:06 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:59.293 21:38:06 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:59.293 21:38:07 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.293 21:38:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.552 21:38:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:59.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.552 --rc genhtml_branch_coverage=1 00:06:59.552 --rc genhtml_function_coverage=1 00:06:59.552 --rc genhtml_legend=1 00:06:59.552 --rc geninfo_all_blocks=1 00:06:59.552 --rc geninfo_unexecuted_blocks=1 00:06:59.552 00:06:59.552 ' 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:59.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.552 --rc genhtml_branch_coverage=1 00:06:59.552 --rc genhtml_function_coverage=1 00:06:59.552 --rc genhtml_legend=1 00:06:59.552 --rc geninfo_all_blocks=1 00:06:59.552 --rc geninfo_unexecuted_blocks=1 00:06:59.552 00:06:59.552 ' 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:59.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.552 --rc genhtml_branch_coverage=1 00:06:59.552 --rc genhtml_function_coverage=1 00:06:59.552 --rc genhtml_legend=1 00:06:59.552 --rc geninfo_all_blocks=1 00:06:59.552 --rc geninfo_unexecuted_blocks=1 00:06:59.552 00:06:59.552 ' 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:59.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.552 --rc genhtml_branch_coverage=1 00:06:59.552 --rc genhtml_function_coverage=1 00:06:59.552 --rc genhtml_legend=1 00:06:59.552 --rc geninfo_all_blocks=1 00:06:59.552 --rc geninfo_unexecuted_blocks=1 00:06:59.552 00:06:59.552 ' 00:06:59.552 21:38:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:59.552 21:38:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59985 00:06:59.552 21:38:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:59.552 21:38:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59985 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59985 ']' 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
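All of the scripts/common.sh activity above (and its earlier repetitions, once per test binary) is a single helper: lt 1.15 2 asks whether the installed lcov predates 2.x, so the matching --rc lcov_* option spelling can be exported in the LCOV_OPTS/LCOV blocks. cmp_versions splits each version string on '.', '-' and ':' into an array and walks the fields numerically. A minimal standalone sketch of that algorithm (fields assumed numeric, as the traced decimal() helper enforces):

    # Minimal version compare in the style of scripts/common.sh:
    # succeeds (returns 0) when $1 < $2.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < len; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly greater
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov is older than 2.x'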
00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.552 21:38:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.552 [2024-12-10 21:38:07.165515] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:06:59.552 [2024-12-10 21:38:07.165665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:06:59.822 [2024-12-10 21:38:07.355418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.822 [2024-12-10 21:38:07.514316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.224 21:38:08 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.224 21:38:08 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.224 21:38:08 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:01.225 21:38:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59985 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59985 ']' 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59985 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59985 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.225 killing process with pid 59985 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59985' 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@973 -- # kill 59985 00:07:01.225 21:38:08 alias_rpc -- common/autotest_common.sh@978 -- # wait 59985 00:07:04.512 00:07:04.512 real 0m4.868s 00:07:04.512 user 0m4.809s 00:07:04.512 sys 0m0.789s 00:07:04.512 21:38:11 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.512 21:38:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.512 ************************************ 00:07:04.512 END TEST alias_rpc 00:07:04.512 ************************************ 00:07:04.512 21:38:11 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:04.512 21:38:11 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:04.512 21:38:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.512 21:38:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.512 21:38:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.512 ************************************ 00:07:04.512 START TEST spdkcli_tcp 00:07:04.512 ************************************ 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:04.512 * Looking for test storage... 
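Before the spdkcli_tcp trace continues: alias_rpc above was torn down through killprocess, which refuses to signal blindly. It first resolves the PID's command name with ps --no-headers -o comm= (reactor_0 in this run), bails out if the process is already gone or is a sudo wrapper, and only then kills and waits so the child is reaped. A rough sketch of that pattern; its counterpart waitforlisten, also traced above, polls in the same spirit until the target's RPC socket answers:

    # Rough sketch of the killprocess safety pattern traced above.
    killprocess() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid") || return 0  # already gone
        [ "$name" = sudo ] && return 1     # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || :   # reap our own child; ignore the signal exit status
    }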
00:07:04.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.512 21:38:11 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.512 --rc genhtml_branch_coverage=1 00:07:04.512 --rc genhtml_function_coverage=1 00:07:04.512 --rc genhtml_legend=1 00:07:04.512 --rc geninfo_all_blocks=1 00:07:04.512 --rc geninfo_unexecuted_blocks=1 00:07:04.512 00:07:04.512 ' 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.512 --rc genhtml_branch_coverage=1 00:07:04.512 --rc genhtml_function_coverage=1 00:07:04.512 --rc genhtml_legend=1 00:07:04.512 --rc geninfo_all_blocks=1 00:07:04.512 --rc geninfo_unexecuted_blocks=1 00:07:04.512 
00:07:04.512 ' 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.512 --rc genhtml_branch_coverage=1 00:07:04.512 --rc genhtml_function_coverage=1 00:07:04.512 --rc genhtml_legend=1 00:07:04.512 --rc geninfo_all_blocks=1 00:07:04.512 --rc geninfo_unexecuted_blocks=1 00:07:04.512 00:07:04.512 ' 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.512 --rc genhtml_branch_coverage=1 00:07:04.512 --rc genhtml_function_coverage=1 00:07:04.512 --rc genhtml_legend=1 00:07:04.512 --rc geninfo_all_blocks=1 00:07:04.512 --rc geninfo_unexecuted_blocks=1 00:07:04.512 00:07:04.512 ' 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60098 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:04.512 21:38:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60098 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 60098 ']' 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.512 21:38:11 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.513 21:38:11 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.513 21:38:11 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.513 21:38:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.513 [2024-12-10 21:38:12.076520] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:07:04.513 [2024-12-10 21:38:12.076654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60098 ] 00:07:04.771 [2024-12-10 21:38:12.264246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.771 [2024-12-10 21:38:12.414411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.771 [2024-12-10 21:38:12.414448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.734 21:38:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.734 21:38:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:05.734 21:38:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:05.734 21:38:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60120 00:07:05.734 21:38:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:05.992 [ 00:07:05.993 "bdev_malloc_delete", 00:07:05.993 "bdev_malloc_create", 00:07:05.993 "bdev_null_resize", 00:07:05.993 "bdev_null_delete", 00:07:05.993 "bdev_null_create", 00:07:05.993 "bdev_nvme_cuse_unregister", 00:07:05.993 "bdev_nvme_cuse_register", 00:07:05.993 "bdev_opal_new_user", 00:07:05.993 "bdev_opal_set_lock_state", 00:07:05.993 "bdev_opal_delete", 00:07:05.993 "bdev_opal_get_info", 00:07:05.993 "bdev_opal_create", 00:07:05.993 "bdev_nvme_opal_revert", 00:07:05.993 "bdev_nvme_opal_init", 00:07:05.993 "bdev_nvme_send_cmd", 00:07:05.993 "bdev_nvme_set_keys", 00:07:05.993 "bdev_nvme_get_path_iostat", 00:07:05.993 "bdev_nvme_get_mdns_discovery_info", 00:07:05.993 "bdev_nvme_stop_mdns_discovery", 00:07:05.993 "bdev_nvme_start_mdns_discovery", 00:07:05.993 "bdev_nvme_set_multipath_policy", 00:07:05.993 "bdev_nvme_set_preferred_path", 00:07:05.993 "bdev_nvme_get_io_paths", 00:07:05.993 "bdev_nvme_remove_error_injection", 00:07:05.993 "bdev_nvme_add_error_injection", 00:07:05.993 "bdev_nvme_get_discovery_info", 00:07:05.993 "bdev_nvme_stop_discovery", 00:07:05.993 "bdev_nvme_start_discovery", 00:07:05.993 "bdev_nvme_get_controller_health_info", 00:07:05.993 "bdev_nvme_disable_controller", 00:07:05.993 "bdev_nvme_enable_controller", 00:07:05.993 "bdev_nvme_reset_controller", 00:07:05.993 "bdev_nvme_get_transport_statistics", 00:07:05.993 "bdev_nvme_apply_firmware", 00:07:05.993 "bdev_nvme_detach_controller", 00:07:05.993 "bdev_nvme_get_controllers", 00:07:05.993 "bdev_nvme_attach_controller", 00:07:05.993 "bdev_nvme_set_hotplug", 00:07:05.993 "bdev_nvme_set_options", 00:07:05.993 "bdev_passthru_delete", 00:07:05.993 "bdev_passthru_create", 00:07:05.993 "bdev_lvol_set_parent_bdev", 00:07:05.993 "bdev_lvol_set_parent", 00:07:05.993 "bdev_lvol_check_shallow_copy", 00:07:05.993 "bdev_lvol_start_shallow_copy", 00:07:05.993 "bdev_lvol_grow_lvstore", 00:07:05.993 "bdev_lvol_get_lvols", 00:07:05.993 "bdev_lvol_get_lvstores", 00:07:05.993 "bdev_lvol_delete", 00:07:05.993 "bdev_lvol_set_read_only", 00:07:05.993 "bdev_lvol_resize", 00:07:05.993 "bdev_lvol_decouple_parent", 00:07:05.993 "bdev_lvol_inflate", 00:07:05.993 "bdev_lvol_rename", 00:07:05.993 "bdev_lvol_clone_bdev", 00:07:05.993 "bdev_lvol_clone", 00:07:05.993 "bdev_lvol_snapshot", 00:07:05.993 "bdev_lvol_create", 00:07:05.993 "bdev_lvol_delete_lvstore", 00:07:05.993 "bdev_lvol_rename_lvstore", 00:07:05.993 
"bdev_lvol_create_lvstore", 00:07:05.993 "bdev_raid_set_options", 00:07:05.993 "bdev_raid_remove_base_bdev", 00:07:05.993 "bdev_raid_add_base_bdev", 00:07:05.993 "bdev_raid_delete", 00:07:05.993 "bdev_raid_create", 00:07:05.993 "bdev_raid_get_bdevs", 00:07:05.993 "bdev_error_inject_error", 00:07:05.993 "bdev_error_delete", 00:07:05.993 "bdev_error_create", 00:07:05.993 "bdev_split_delete", 00:07:05.993 "bdev_split_create", 00:07:05.993 "bdev_delay_delete", 00:07:05.993 "bdev_delay_create", 00:07:05.993 "bdev_delay_update_latency", 00:07:05.993 "bdev_zone_block_delete", 00:07:05.993 "bdev_zone_block_create", 00:07:05.993 "blobfs_create", 00:07:05.993 "blobfs_detect", 00:07:05.993 "blobfs_set_cache_size", 00:07:05.993 "bdev_xnvme_delete", 00:07:05.993 "bdev_xnvme_create", 00:07:05.993 "bdev_aio_delete", 00:07:05.993 "bdev_aio_rescan", 00:07:05.993 "bdev_aio_create", 00:07:05.993 "bdev_ftl_set_property", 00:07:05.993 "bdev_ftl_get_properties", 00:07:05.993 "bdev_ftl_get_stats", 00:07:05.993 "bdev_ftl_unmap", 00:07:05.993 "bdev_ftl_unload", 00:07:05.993 "bdev_ftl_delete", 00:07:05.993 "bdev_ftl_load", 00:07:05.993 "bdev_ftl_create", 00:07:05.993 "bdev_virtio_attach_controller", 00:07:05.993 "bdev_virtio_scsi_get_devices", 00:07:05.993 "bdev_virtio_detach_controller", 00:07:05.993 "bdev_virtio_blk_set_hotplug", 00:07:05.993 "bdev_iscsi_delete", 00:07:05.993 "bdev_iscsi_create", 00:07:05.993 "bdev_iscsi_set_options", 00:07:05.993 "accel_error_inject_error", 00:07:05.993 "ioat_scan_accel_module", 00:07:05.993 "dsa_scan_accel_module", 00:07:05.993 "iaa_scan_accel_module", 00:07:05.993 "keyring_file_remove_key", 00:07:05.993 "keyring_file_add_key", 00:07:05.993 "keyring_linux_set_options", 00:07:05.993 "fsdev_aio_delete", 00:07:05.993 "fsdev_aio_create", 00:07:05.993 "iscsi_get_histogram", 00:07:05.993 "iscsi_enable_histogram", 00:07:05.993 "iscsi_set_options", 00:07:05.993 "iscsi_get_auth_groups", 00:07:05.993 "iscsi_auth_group_remove_secret", 00:07:05.993 "iscsi_auth_group_add_secret", 00:07:05.993 "iscsi_delete_auth_group", 00:07:05.993 "iscsi_create_auth_group", 00:07:05.993 "iscsi_set_discovery_auth", 00:07:05.993 "iscsi_get_options", 00:07:05.993 "iscsi_target_node_request_logout", 00:07:05.993 "iscsi_target_node_set_redirect", 00:07:05.993 "iscsi_target_node_set_auth", 00:07:05.993 "iscsi_target_node_add_lun", 00:07:05.993 "iscsi_get_stats", 00:07:05.993 "iscsi_get_connections", 00:07:05.993 "iscsi_portal_group_set_auth", 00:07:05.993 "iscsi_start_portal_group", 00:07:05.993 "iscsi_delete_portal_group", 00:07:05.993 "iscsi_create_portal_group", 00:07:05.993 "iscsi_get_portal_groups", 00:07:05.993 "iscsi_delete_target_node", 00:07:05.993 "iscsi_target_node_remove_pg_ig_maps", 00:07:05.993 "iscsi_target_node_add_pg_ig_maps", 00:07:05.993 "iscsi_create_target_node", 00:07:05.993 "iscsi_get_target_nodes", 00:07:05.993 "iscsi_delete_initiator_group", 00:07:05.993 "iscsi_initiator_group_remove_initiators", 00:07:05.993 "iscsi_initiator_group_add_initiators", 00:07:05.993 "iscsi_create_initiator_group", 00:07:05.993 "iscsi_get_initiator_groups", 00:07:05.993 "nvmf_set_crdt", 00:07:05.993 "nvmf_set_config", 00:07:05.993 "nvmf_set_max_subsystems", 00:07:05.993 "nvmf_stop_mdns_prr", 00:07:05.993 "nvmf_publish_mdns_prr", 00:07:05.993 "nvmf_subsystem_get_listeners", 00:07:05.993 "nvmf_subsystem_get_qpairs", 00:07:05.993 "nvmf_subsystem_get_controllers", 00:07:05.993 "nvmf_get_stats", 00:07:05.993 "nvmf_get_transports", 00:07:05.993 "nvmf_create_transport", 00:07:05.993 "nvmf_get_targets", 00:07:05.993 
"nvmf_delete_target", 00:07:05.993 "nvmf_create_target", 00:07:05.993 "nvmf_subsystem_allow_any_host", 00:07:05.993 "nvmf_subsystem_set_keys", 00:07:05.993 "nvmf_subsystem_remove_host", 00:07:05.993 "nvmf_subsystem_add_host", 00:07:05.993 "nvmf_ns_remove_host", 00:07:05.993 "nvmf_ns_add_host", 00:07:05.993 "nvmf_subsystem_remove_ns", 00:07:05.993 "nvmf_subsystem_set_ns_ana_group", 00:07:05.993 "nvmf_subsystem_add_ns", 00:07:05.993 "nvmf_subsystem_listener_set_ana_state", 00:07:05.993 "nvmf_discovery_get_referrals", 00:07:05.993 "nvmf_discovery_remove_referral", 00:07:05.993 "nvmf_discovery_add_referral", 00:07:05.993 "nvmf_subsystem_remove_listener", 00:07:05.993 "nvmf_subsystem_add_listener", 00:07:05.993 "nvmf_delete_subsystem", 00:07:05.993 "nvmf_create_subsystem", 00:07:05.993 "nvmf_get_subsystems", 00:07:05.993 "env_dpdk_get_mem_stats", 00:07:05.993 "nbd_get_disks", 00:07:05.993 "nbd_stop_disk", 00:07:05.993 "nbd_start_disk", 00:07:05.993 "ublk_recover_disk", 00:07:05.993 "ublk_get_disks", 00:07:05.993 "ublk_stop_disk", 00:07:05.993 "ublk_start_disk", 00:07:05.993 "ublk_destroy_target", 00:07:05.993 "ublk_create_target", 00:07:05.993 "virtio_blk_create_transport", 00:07:05.993 "virtio_blk_get_transports", 00:07:05.993 "vhost_controller_set_coalescing", 00:07:05.993 "vhost_get_controllers", 00:07:05.993 "vhost_delete_controller", 00:07:05.993 "vhost_create_blk_controller", 00:07:05.993 "vhost_scsi_controller_remove_target", 00:07:05.993 "vhost_scsi_controller_add_target", 00:07:05.993 "vhost_start_scsi_controller", 00:07:05.993 "vhost_create_scsi_controller", 00:07:05.993 "thread_set_cpumask", 00:07:05.993 "scheduler_set_options", 00:07:05.993 "framework_get_governor", 00:07:05.993 "framework_get_scheduler", 00:07:05.993 "framework_set_scheduler", 00:07:05.993 "framework_get_reactors", 00:07:05.993 "thread_get_io_channels", 00:07:05.993 "thread_get_pollers", 00:07:05.993 "thread_get_stats", 00:07:05.993 "framework_monitor_context_switch", 00:07:05.993 "spdk_kill_instance", 00:07:05.993 "log_enable_timestamps", 00:07:05.993 "log_get_flags", 00:07:05.993 "log_clear_flag", 00:07:05.993 "log_set_flag", 00:07:05.993 "log_get_level", 00:07:05.993 "log_set_level", 00:07:05.993 "log_get_print_level", 00:07:05.993 "log_set_print_level", 00:07:05.993 "framework_enable_cpumask_locks", 00:07:05.993 "framework_disable_cpumask_locks", 00:07:05.993 "framework_wait_init", 00:07:05.993 "framework_start_init", 00:07:05.993 "scsi_get_devices", 00:07:05.993 "bdev_get_histogram", 00:07:05.993 "bdev_enable_histogram", 00:07:05.993 "bdev_set_qos_limit", 00:07:05.993 "bdev_set_qd_sampling_period", 00:07:05.993 "bdev_get_bdevs", 00:07:05.993 "bdev_reset_iostat", 00:07:05.993 "bdev_get_iostat", 00:07:05.993 "bdev_examine", 00:07:05.993 "bdev_wait_for_examine", 00:07:05.993 "bdev_set_options", 00:07:05.993 "accel_get_stats", 00:07:05.993 "accel_set_options", 00:07:05.993 "accel_set_driver", 00:07:05.993 "accel_crypto_key_destroy", 00:07:05.993 "accel_crypto_keys_get", 00:07:05.993 "accel_crypto_key_create", 00:07:05.993 "accel_assign_opc", 00:07:05.993 "accel_get_module_info", 00:07:05.993 "accel_get_opc_assignments", 00:07:05.993 "vmd_rescan", 00:07:05.993 "vmd_remove_device", 00:07:05.993 "vmd_enable", 00:07:05.993 "sock_get_default_impl", 00:07:05.993 "sock_set_default_impl", 00:07:05.993 "sock_impl_set_options", 00:07:05.993 "sock_impl_get_options", 00:07:05.993 "iobuf_get_stats", 00:07:05.993 "iobuf_set_options", 00:07:05.993 "keyring_get_keys", 00:07:05.994 "framework_get_pci_devices", 00:07:05.994 
"framework_get_config", 00:07:05.994 "framework_get_subsystems", 00:07:05.994 "fsdev_set_opts", 00:07:05.994 "fsdev_get_opts", 00:07:05.994 "trace_get_info", 00:07:05.994 "trace_get_tpoint_group_mask", 00:07:05.994 "trace_disable_tpoint_group", 00:07:05.994 "trace_enable_tpoint_group", 00:07:05.994 "trace_clear_tpoint_mask", 00:07:05.994 "trace_set_tpoint_mask", 00:07:05.994 "notify_get_notifications", 00:07:05.994 "notify_get_types", 00:07:05.994 "spdk_get_version", 00:07:05.994 "rpc_get_methods" 00:07:05.994 ] 00:07:05.994 21:38:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:05.994 21:38:13 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:05.994 21:38:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.994 21:38:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:05.994 21:38:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60098 00:07:05.994 21:38:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 60098 ']' 00:07:05.994 21:38:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 60098 00:07:05.994 21:38:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:06.252 21:38:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.252 21:38:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60098 00:07:06.252 21:38:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.252 21:38:13 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.252 killing process with pid 60098 00:07:06.252 21:38:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60098' 00:07:06.252 21:38:13 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 60098 00:07:06.252 21:38:13 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 60098 00:07:08.783 00:07:08.783 real 0m4.654s 00:07:08.783 user 0m8.110s 00:07:08.783 sys 0m0.852s 00:07:08.783 21:38:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.783 21:38:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.783 ************************************ 00:07:08.783 END TEST spdkcli_tcp 00:07:08.783 ************************************ 00:07:08.783 21:38:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:08.783 21:38:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.783 21:38:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.783 21:38:16 -- common/autotest_common.sh@10 -- # set +x 00:07:08.783 ************************************ 00:07:08.783 START TEST dpdk_mem_utility 00:07:08.783 ************************************ 00:07:08.783 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:09.042 * Looking for test storage... 
00:07:09.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.042 21:38:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.042 --rc genhtml_branch_coverage=1 00:07:09.042 --rc genhtml_function_coverage=1 00:07:09.042 --rc genhtml_legend=1 00:07:09.042 --rc geninfo_all_blocks=1 00:07:09.042 --rc geninfo_unexecuted_blocks=1 00:07:09.042 00:07:09.042 ' 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.042 --rc 
genhtml_branch_coverage=1 00:07:09.042 --rc genhtml_function_coverage=1 00:07:09.042 --rc genhtml_legend=1 00:07:09.042 --rc geninfo_all_blocks=1 00:07:09.042 --rc geninfo_unexecuted_blocks=1 00:07:09.042 00:07:09.042 ' 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.042 --rc genhtml_branch_coverage=1 00:07:09.042 --rc genhtml_function_coverage=1 00:07:09.042 --rc genhtml_legend=1 00:07:09.042 --rc geninfo_all_blocks=1 00:07:09.042 --rc geninfo_unexecuted_blocks=1 00:07:09.042 00:07:09.042 ' 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.042 --rc genhtml_branch_coverage=1 00:07:09.042 --rc genhtml_function_coverage=1 00:07:09.042 --rc genhtml_legend=1 00:07:09.042 --rc geninfo_all_blocks=1 00:07:09.042 --rc geninfo_unexecuted_blocks=1 00:07:09.042 00:07:09.042 ' 00:07:09.042 21:38:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:09.042 21:38:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60231 00:07:09.042 21:38:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:09.042 21:38:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60231 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60231 ']' 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.042 21:38:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:09.332 [2024-12-10 21:38:16.799208] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:07:09.332 [2024-12-10 21:38:16.799353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60231 ]
00:07:09.332 [2024-12-10 21:38:16.983017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.590 [2024-12-10 21:38:17.133571] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.524 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:10.524 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:07:10.524 21:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:10.524 21:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:10.524 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:10.524 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:10.524 {
00:07:10.524 "filename": "/tmp/spdk_mem_dump.txt"
00:07:10.524 }
00:07:10.524 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:10.524 21:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:07:10.785 DPDK memory size 824.000000 MiB in 1 heap(s)
00:07:10.785 1 heaps totaling size 824.000000 MiB
00:07:10.785 size: 824.000000 MiB heap id: 0
00:07:10.785 end heaps----------
00:07:10.785 9 mempools totaling size 603.782043 MiB
00:07:10.785 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:07:10.785 size: 158.602051 MiB name: PDU_data_out_Pool
00:07:10.785 size: 100.555481 MiB name: bdev_io_60231
00:07:10.785 size: 50.003479 MiB name: msgpool_60231
00:07:10.785 size: 36.509338 MiB name: fsdev_io_60231
00:07:10.785 size: 21.763794 MiB name: PDU_Pool
00:07:10.785 size: 19.513306 MiB name: SCSI_TASK_Pool
00:07:10.785 size: 4.133484 MiB name: evtpool_60231
00:07:10.785 size: 0.026123 MiB name: Session_Pool
00:07:10.785 end mempools-------
00:07:10.785 6 memzones totaling size 4.142822 MiB
00:07:10.785 size: 1.000366 MiB name: RG_ring_0_60231
00:07:10.785 size: 1.000366 MiB name: RG_ring_1_60231
00:07:10.785 size: 1.000366 MiB name: RG_ring_4_60231
00:07:10.785 size: 1.000366 MiB name: RG_ring_5_60231
00:07:10.785 size: 0.125366 MiB name: RG_ring_2_60231
00:07:10.785 size: 0.015991 MiB name: RG_ring_3_60231
00:07:10.785 end memzones-------
00:07:10.785 21:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:07:10.785 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18
00:07:10.786 list of free elements.
size: 16.781860 MiB 00:07:10.786 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:10.786 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:10.786 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:10.786 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:10.786 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:10.786 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:10.786 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:10.786 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:10.786 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:10.786 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:10.786 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:10.786 element at address: 0x20001b400000 with size: 0.563171 MiB 00:07:10.786 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:10.786 element at address: 0x200019600000 with size: 0.487976 MiB 00:07:10.786 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:10.786 element at address: 0x200012c00000 with size: 0.433472 MiB 00:07:10.786 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:10.786 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:10.786 list of standard malloc elements. size: 199.287231 MiB 00:07:10.786 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:10.786 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:10.786 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:10.786 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:10.786 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:07:10.786 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:10.786 element at address: 0x200019deff40 with size: 0.062683 MiB 00:07:10.786 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:10.786 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:10.786 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:07:10.786 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:10.786 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:07:10.786 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:10.786 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:07:10.786 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:07:10.786 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:07:10.787 element at address: 0x200019affc40 with size: 0.000244 MiB 00:07:10.787 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b491fc0 with size: 0.000244 MiB 
00:07:10.787 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:07:10.787 element at 
address: 0x20001b4951c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:07:10.787 element at address: 0x200028863f40 with size: 0.000244 MiB 00:07:10.787 element at address: 0x200028864040 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886af80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b080 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b180 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b280 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b380 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b480 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b580 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b680 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b780 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b880 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886b980 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886be80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c080 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c180 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c280 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c380 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c480 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c580 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c680 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c780 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c880 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886c980 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d080 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d180 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d280 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d380 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d480 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d580 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d680 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d780 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d880 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886d980 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886da80 
with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886db80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886de80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886df80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e080 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e180 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e280 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e380 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e480 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e580 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e680 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e780 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e880 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886e980 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:07:10.787 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f080 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f180 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f280 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f380 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f480 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f580 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f680 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f780 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f880 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886f980 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:07:10.788 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:07:10.788 list of memzone associated elements. 
size: 607.930908 MiB 00:07:10.788 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:07:10.788 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:10.788 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:07:10.788 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:10.788 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:07:10.788 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60231_0 00:07:10.788 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:10.788 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60231_0 00:07:10.788 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:10.788 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60231_0 00:07:10.788 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:07:10.788 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:10.788 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:07:10.788 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:10.788 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:10.788 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60231_0 00:07:10.788 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:10.788 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60231 00:07:10.788 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:10.788 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60231 00:07:10.788 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:07:10.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:10.788 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:07:10.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:10.788 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:10.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:10.788 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:07:10.788 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:10.788 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:10.788 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60231 00:07:10.788 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:10.788 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60231 00:07:10.788 element at address: 0x200019affd40 with size: 1.000549 MiB 00:07:10.788 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60231 00:07:10.788 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:07:10.788 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60231 00:07:10.788 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:10.788 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60231 00:07:10.788 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:10.788 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60231 00:07:10.788 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:07:10.788 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:10.788 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:07:10.788 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:10.788 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:07:10.788 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:10.788 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:10.788 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60231 00:07:10.788 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:10.788 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60231 00:07:10.788 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:07:10.788 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:10.788 element at address: 0x200028864140 with size: 0.023804 MiB 00:07:10.788 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:10.788 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:10.788 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60231 00:07:10.788 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:07:10.788 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:10.788 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:10.788 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60231 00:07:10.788 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:10.788 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60231 00:07:10.788 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:10.788 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60231 00:07:10.788 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:07:10.788 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:10.788 21:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:10.788 21:38:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60231 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60231 ']' 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60231 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60231 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.788 killing process with pid 60231 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60231' 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60231 00:07:10.788 21:38:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60231 00:07:13.321 00:07:13.321 real 0m4.517s 00:07:13.321 user 0m4.282s 00:07:13.321 sys 0m0.766s 00:07:13.321 21:38:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.321 21:38:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:13.321 ************************************ 00:07:13.321 END TEST dpdk_mem_utility 00:07:13.321 ************************************ 00:07:13.321 21:38:21 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:13.321 21:38:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.321 21:38:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.321 21:38:21 -- common/autotest_common.sh@10 -- # set +x 
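The dpdk_mem_utility run above reduces to two moving parts: the env_dpdk_get_mem_stats RPC asks the target to write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders the heap/mempool/memzone summaries and per-element listing seen in the log. A small sketch of tallying such a dump directly; the "element at address: ... with size: N MiB" layout is assumed from the excerpt above rather than from any documented format.

import re
from collections import Counter

ELEMENT = re.compile(r"element at address: (0x[0-9a-f]+) with size:\s+([0-9.]+) MiB")

def summarize(path="/tmp/spdk_mem_dump.txt"):
    # Total the element sizes and count how often each size recurs,
    # mirroring the element-count figures printed by dpdk_mem_info.py -m 0.
    total_mib = 0.0
    by_size = Counter()
    with open(path) as dump:
        for line in dump:
            match = ELEMENT.search(line)
            if match:
                by_size[match.group(2)] += 1
                total_mib += float(match.group(2))
    return total_mib, by_size.most_common(5)

if __name__ == "__main__":
    total, top = summarize()
    print(f"{total:.6f} MiB across matched elements; most common sizes: {top}")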
00:07:13.321 ************************************ 00:07:13.321 START TEST event 00:07:13.321 ************************************ 00:07:13.321 21:38:21 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:13.579 * Looking for test storage... 00:07:13.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:13.579 21:38:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.579 21:38:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.579 21:38:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.579 21:38:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.579 21:38:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.579 21:38:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.579 21:38:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.579 21:38:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.579 21:38:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.579 21:38:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.579 21:38:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.579 21:38:21 event -- scripts/common.sh@344 -- # case "$op" in 00:07:13.579 21:38:21 event -- scripts/common.sh@345 -- # : 1 00:07:13.579 21:38:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.579 21:38:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.579 21:38:21 event -- scripts/common.sh@365 -- # decimal 1 00:07:13.579 21:38:21 event -- scripts/common.sh@353 -- # local d=1 00:07:13.579 21:38:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.579 21:38:21 event -- scripts/common.sh@355 -- # echo 1 00:07:13.579 21:38:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.579 21:38:21 event -- scripts/common.sh@366 -- # decimal 2 00:07:13.579 21:38:21 event -- scripts/common.sh@353 -- # local d=2 00:07:13.579 21:38:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.579 21:38:21 event -- scripts/common.sh@355 -- # echo 2 00:07:13.579 21:38:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.579 21:38:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.579 21:38:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.579 21:38:21 event -- scripts/common.sh@368 -- # return 0 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.579 --rc genhtml_branch_coverage=1 00:07:13.579 --rc genhtml_function_coverage=1 00:07:13.579 --rc genhtml_legend=1 00:07:13.579 --rc geninfo_all_blocks=1 00:07:13.579 --rc geninfo_unexecuted_blocks=1 00:07:13.579 00:07:13.579 ' 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.579 --rc genhtml_branch_coverage=1 00:07:13.579 --rc genhtml_function_coverage=1 00:07:13.579 --rc genhtml_legend=1 00:07:13.579 --rc 
geninfo_all_blocks=1 00:07:13.579 --rc geninfo_unexecuted_blocks=1 00:07:13.579 00:07:13.579 ' 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.579 --rc genhtml_branch_coverage=1 00:07:13.579 --rc genhtml_function_coverage=1 00:07:13.579 --rc genhtml_legend=1 00:07:13.579 --rc geninfo_all_blocks=1 00:07:13.579 --rc geninfo_unexecuted_blocks=1 00:07:13.579 00:07:13.579 ' 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.579 --rc genhtml_branch_coverage=1 00:07:13.579 --rc genhtml_function_coverage=1 00:07:13.579 --rc genhtml_legend=1 00:07:13.579 --rc geninfo_all_blocks=1 00:07:13.579 --rc geninfo_unexecuted_blocks=1 00:07:13.579 00:07:13.579 ' 00:07:13.579 21:38:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:13.579 21:38:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:13.579 21:38:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:13.579 21:38:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.579 21:38:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.579 ************************************ 00:07:13.579 START TEST event_perf 00:07:13.579 ************************************ 00:07:13.579 21:38:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:13.855 Running I/O for 1 seconds...[2024-12-10 21:38:21.339169] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:07:13.855 [2024-12-10 21:38:21.339284] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:07:13.855 [2024-12-10 21:38:21.513350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.114 [2024-12-10 21:38:21.666018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.114 [2024-12-10 21:38:21.666168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.114 Running I/O for 1 seconds...[2024-12-10 21:38:21.666934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.114 [2024-12-10 21:38:21.666952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.490 00:07:15.490 lcore 0: 196534 00:07:15.490 lcore 1: 196534 00:07:15.490 lcore 2: 196533 00:07:15.490 lcore 3: 196533 00:07:15.490 done. 
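The lcore lines above are per-reactor event counts for event_perf's one-second window (-m 0xF -t 1), so aggregate throughput is just their sum; a trivial check with the counts copied from this log:

# counts copied verbatim from the lcore lines above
lcore_counts = {0: 196534, 1: 196534, 2: 196533, 3: 196533}
total = sum(lcore_counts.values())
print(f"{total} events/sec across {len(lcore_counts)} reactors, "
      f"~{total // len(lcore_counts)} per reactor")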
00:07:15.490 00:07:15.490 real 0m1.649s 00:07:15.490 user 0m4.374s 00:07:15.490 sys 0m0.151s 00:07:15.490 21:38:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.490 21:38:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.490 ************************************ 00:07:15.490 END TEST event_perf 00:07:15.490 ************************************ 00:07:15.490 21:38:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:15.490 21:38:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:15.490 21:38:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.490 21:38:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.490 ************************************ 00:07:15.490 START TEST event_reactor 00:07:15.490 ************************************ 00:07:15.490 21:38:23 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:15.490 [2024-12-10 21:38:23.054776] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:07:15.490 [2024-12-10 21:38:23.054896] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:07:15.749 [2024-12-10 21:38:23.237045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.749 [2024-12-10 21:38:23.372709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.126 test_start 00:07:17.126 oneshot 00:07:17.126 tick 100 00:07:17.126 tick 100 00:07:17.126 tick 250 00:07:17.126 tick 100 00:07:17.126 tick 100 00:07:17.126 tick 100 00:07:17.126 tick 250 00:07:17.126 tick 500 00:07:17.126 tick 100 00:07:17.126 tick 100 00:07:17.126 tick 250 00:07:17.126 tick 100 00:07:17.126 tick 100 00:07:17.126 test_end 00:07:17.126 00:07:17.126 real 0m1.604s 00:07:17.126 user 0m1.389s 00:07:17.126 sys 0m0.107s 00:07:17.126 21:38:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.126 21:38:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:17.126 ************************************ 00:07:17.126 END TEST event_reactor 00:07:17.126 ************************************ 00:07:17.126 21:38:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:17.126 21:38:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:17.126 21:38:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.126 21:38:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.126 ************************************ 00:07:17.126 START TEST event_reactor_perf 00:07:17.126 ************************************ 00:07:17.126 21:38:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:17.126 [2024-12-10 21:38:24.727437] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:07:17.126 [2024-12-10 21:38:24.727569] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60415 ] 00:07:17.384 [2024-12-10 21:38:24.909660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.384 [2024-12-10 21:38:25.055710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.810 test_start 00:07:18.810 test_end 00:07:18.810 Performance: 379421 events per second 00:07:18.810 00:07:18.810 real 0m1.635s 00:07:18.810 user 0m1.415s 00:07:18.810 sys 0m0.112s 00:07:18.810 21:38:26 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.810 21:38:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.810 ************************************ 00:07:18.810 END TEST event_reactor_perf 00:07:18.810 ************************************ 00:07:18.810 21:38:26 event -- event/event.sh@49 -- # uname -s 00:07:18.810 21:38:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:18.810 21:38:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:18.810 21:38:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.810 21:38:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.810 21:38:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.810 ************************************ 00:07:18.810 START TEST event_scheduler 00:07:18.810 ************************************ 00:07:18.810 21:38:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:18.810 * Looking for test storage... 
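The scheduler section below opens with the same lcov version gate that precedes every test in this log (scripts/common.sh tracing "lt 1.15 2" through cmp_versions): each version is split on . - :, the fields are compared as integers, and a missing field counts as zero. A compact Python rendering of that comparison; the zero-padding detail is an assumption read off the shell trace, which loops up to the longer field count.

import re

def version_lt(ver1: str, ver2: str) -> bool:
    # Split on the same separators as the shell's IFS=.-: and compare numerically.
    v1 = [int(x) for x in re.split(r"[.:-]", ver1)]
    v2 = [int(x) for x in re.split(r"[.:-]", ver2)]
    width = max(len(v1), len(v2))  # the trace iterates to the longer field count
    v1 += [0] * (width - len(v1))
    v2 += [0] * (width - len(v2))
    return v1 < v2

assert version_lt("1.15", "2")  # the exact gate traced here: lcov 1.15 is older than 2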
00:07:18.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:18.810 21:38:26 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.810 21:38:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.810 21:38:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.068 21:38:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.068 21:38:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.069 21:38:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.069 --rc genhtml_branch_coverage=1 00:07:19.069 --rc genhtml_function_coverage=1 00:07:19.069 --rc genhtml_legend=1 00:07:19.069 --rc geninfo_all_blocks=1 00:07:19.069 --rc geninfo_unexecuted_blocks=1 00:07:19.069 00:07:19.069 ' 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.069 --rc genhtml_branch_coverage=1 00:07:19.069 --rc genhtml_function_coverage=1 00:07:19.069 --rc genhtml_legend=1 00:07:19.069 --rc geninfo_all_blocks=1 00:07:19.069 --rc geninfo_unexecuted_blocks=1 00:07:19.069 00:07:19.069 ' 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.069 --rc genhtml_branch_coverage=1 00:07:19.069 --rc genhtml_function_coverage=1 00:07:19.069 --rc genhtml_legend=1 00:07:19.069 --rc geninfo_all_blocks=1 00:07:19.069 --rc geninfo_unexecuted_blocks=1 00:07:19.069 00:07:19.069 ' 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.069 --rc genhtml_branch_coverage=1 00:07:19.069 --rc genhtml_function_coverage=1 00:07:19.069 --rc genhtml_legend=1 00:07:19.069 --rc geninfo_all_blocks=1 00:07:19.069 --rc geninfo_unexecuted_blocks=1 00:07:19.069 00:07:19.069 ' 00:07:19.069 21:38:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:19.069 21:38:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:19.069 21:38:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60491 00:07:19.069 21:38:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:19.069 21:38:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60491 00:07:19.069 21:38:26 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60491 ']' 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.069 21:38:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:19.069 [2024-12-10 21:38:26.690554] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:07:19.069 [2024-12-10 21:38:26.690759] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60491 ] 00:07:19.327 [2024-12-10 21:38:26.878364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.327 [2024-12-10 21:38:27.046525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.327 [2024-12-10 21:38:27.046565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.327 [2024-12-10 21:38:27.046595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.327 [2024-12-10 21:38:27.046602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.263 21:38:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.263 21:38:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:20.263 21:38:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:20.263 21:38:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.263 21:38:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:20.263 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:20.263 POWER: Cannot set governor of lcore 0 to userspace 00:07:20.263 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:20.263 POWER: Cannot set governor of lcore 0 to performance 00:07:20.263 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:20.263 POWER: Cannot set governor of lcore 0 to userspace 00:07:20.263 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:20.263 POWER: Cannot set governor of lcore 0 to userspace 00:07:20.263 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:20.263 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:20.263 POWER: Unable to set Power Management Environment for lcore 0 00:07:20.263 [2024-12-10 21:38:27.752711] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:20.263 [2024-12-10 21:38:27.752741] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:20.263 [2024-12-10 21:38:27.752755] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:20.263 [2024-12-10 21:38:27.752778] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:20.263 [2024-12-10 21:38:27.752793] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:20.263 [2024-12-10 21:38:27.752807] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:20.263 21:38:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.263 21:38:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:20.263 21:38:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.263 21:38:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 [2024-12-10 21:38:28.184514] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:20.521 21:38:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.521 21:38:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:20.521 21:38:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.521 21:38:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.521 21:38:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 ************************************ 00:07:20.521 START TEST scheduler_create_thread 00:07:20.521 ************************************ 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 2 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 3 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 4 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 5 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 6 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 7 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.521 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.780 8 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.780 9 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.780 10 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.780 21:38:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.174 21:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.174 21:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:22.174 21:38:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:22.174 21:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.174 21:38:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.131 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.131 00:07:23.131 real 0m2.615s 00:07:23.131 user 0m0.018s 00:07:23.131 sys 0m0.004s 00:07:23.131 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.131 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.131 ************************************ 00:07:23.131 END TEST scheduler_create_thread 00:07:23.131 ************************************ 00:07:23.131 21:38:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:23.131 21:38:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60491 00:07:23.131 21:38:30 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60491 ']' 00:07:23.131 21:38:30 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60491 00:07:23.131 21:38:30 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:23.131 21:38:30 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.131 21:38:30 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60491 00:07:23.388 21:38:30 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:23.388 21:38:30 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:23.388 killing process with pid 60491 00:07:23.388 21:38:30 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60491' 00:07:23.388 21:38:30 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60491 00:07:23.388 21:38:30 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60491 00:07:23.646 [2024-12-10 21:38:31.187889] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:25.022 00:07:25.022 real 0m6.117s 00:07:25.022 user 0m13.149s 00:07:25.022 sys 0m0.608s 00:07:25.022 21:38:32 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.022 21:38:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:25.022 ************************************ 00:07:25.022 END TEST event_scheduler 00:07:25.022 ************************************ 00:07:25.022 21:38:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:25.022 21:38:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:25.022 21:38:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.022 21:38:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.022 21:38:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.022 ************************************ 00:07:25.022 START TEST app_repeat 00:07:25.022 ************************************ 00:07:25.022 21:38:32 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60608 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:25.022 Process app_repeat pid: 60608 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60608' 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:25.022 spdk_app_start Round 0 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:25.022 21:38:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60608 /var/tmp/spdk-nbd.sock 00:07:25.022 21:38:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60608 ']' 00:07:25.022 21:38:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.022 21:38:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:25.022 21:38:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:25.022 21:38:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.022 21:38:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.022 [2024-12-10 21:38:32.640438] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
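The app_repeat run that begins here follows the same startup handshake the scheduler test above used: the harness launches the test binary with a private RPC socket (-r /var/tmp/spdk-nbd.sock), installs a kill trap, and then blocks in waitforlisten until that socket answers RPCs. A minimal sketch of that polling pattern, assuming spdk_get_version as the liveness probe and a 0.5 s retry interval (the probe and interval actually used inside autotest_common.sh are not shown in this log; the max_retries=100 local traced above suggests the real loop is bounded the same way):

    # Sketch of a waitforlisten-style poll loop. Probe RPC, sleep interval,
    # and structure are assumptions, not the verbatim autotest_common.sh helper.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 1; i <= 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # app died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                   -t 1 spdk_get_version &>/dev/null; then
                return 0                                     # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                             # retries exhausted
    }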
00:07:25.022 [2024-12-10 21:38:32.640560] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:07:25.280 [2024-12-10 21:38:32.824919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.280 [2024-12-10 21:38:32.967136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.280 [2024-12-10 21:38:32.967178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.846 21:38:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.846 21:38:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:25.846 21:38:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.105 Malloc0 00:07:26.105 21:38:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.404 Malloc1 00:07:26.673 21:38:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:26.673 /dev/nbd0 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:26.673 21:38:34 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.673 1+0 records in 00:07:26.673 1+0 records out 00:07:26.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324088 s, 12.6 MB/s 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:26.673 21:38:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.673 21:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.674 21:38:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:26.932 /dev/nbd1 00:07:26.932 21:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:26.932 21:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.932 1+0 records in 00:07:26.932 1+0 records out 00:07:26.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024997 s, 16.4 MB/s 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:26.932 21:38:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:26.932 21:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.932 21:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.932 21:38:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.932 21:38:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.933 
21:38:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:27.498 { 00:07:27.498 "nbd_device": "/dev/nbd0", 00:07:27.498 "bdev_name": "Malloc0" 00:07:27.498 }, 00:07:27.498 { 00:07:27.498 "nbd_device": "/dev/nbd1", 00:07:27.498 "bdev_name": "Malloc1" 00:07:27.498 } 00:07:27.498 ]' 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:27.498 { 00:07:27.498 "nbd_device": "/dev/nbd0", 00:07:27.498 "bdev_name": "Malloc0" 00:07:27.498 }, 00:07:27.498 { 00:07:27.498 "nbd_device": "/dev/nbd1", 00:07:27.498 "bdev_name": "Malloc1" 00:07:27.498 } 00:07:27.498 ]' 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:27.498 /dev/nbd1' 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:27.498 /dev/nbd1' 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:27.498 21:38:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:27.498 256+0 records in 00:07:27.498 256+0 records out 00:07:27.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125788 s, 83.4 MB/s 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:27.498 256+0 records in 00:07:27.498 256+0 records out 00:07:27.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293421 s, 35.7 MB/s 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:27.498 256+0 records in 00:07:27.498 256+0 records out 00:07:27.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0355802 s, 29.5 MB/s 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.498 21:38:35 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.498 21:38:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.755 21:38:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.012 21:38:35 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.012 21:38:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.269 21:38:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.269 21:38:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:28.834 21:38:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:30.208 [2024-12-10 21:38:37.612330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.208 [2024-12-10 21:38:37.752983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.208 [2024-12-10 21:38:37.752983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.467 [2024-12-10 21:38:37.976417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:30.467 [2024-12-10 21:38:37.976523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:31.844 spdk_app_start Round 1 00:07:31.844 21:38:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:31.844 21:38:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:31.844 21:38:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60608 /var/tmp/spdk-nbd.sock 00:07:31.844 21:38:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60608 ']' 00:07:31.844 21:38:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.844 21:38:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.844 21:38:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
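Round 1 now repeats the cycle Round 0 just completed: create two malloc bdevs (64 MB, 4096-byte blocks), export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with dd, verify it byte-for-byte with cmp, and tear everything down. Condensed into a sketch for a single device, using the rpc.py subcommands and dd/cmp invocations the trace shows (variable names are abbreviations for illustration, not the harness's own):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest                       # stand-in for the test's temp file

    # Create a 64 MB malloc bdev with 4096-byte blocks; rpc.py prints its name.
    bdev=$($rpc -s $sock bdev_malloc_create 64 4096)
    $rpc -s $sock nbd_start_disk "$bdev" /dev/nbd0

    # Write 1 MiB (256 x 4 KiB) of random data through the NBD device,
    # then compare it against the source file.
    dd if=/dev/urandom of=$tmp bs=4096 count=256
    dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M $tmp /dev/nbd0                # non-zero exit on any mismatch

    # Detach so nbd_get_disks reports an empty list before the next round.
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    rm -f $tmp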
00:07:31.844 21:38:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.844 21:38:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.844 21:38:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.844 21:38:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:31.844 21:38:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.409 Malloc0 00:07:32.409 21:38:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.667 Malloc1 00:07:32.667 21:38:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.667 /dev/nbd0 00:07:32.667 21:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.926 21:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.926 1+0 records in 00:07:32.926 1+0 records out 
00:07:32.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265283 s, 15.4 MB/s 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:32.926 21:38:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:32.926 21:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.926 21:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.926 21:38:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:32.926 /dev/nbd1 00:07:33.184 21:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:33.184 21:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:33.184 1+0 records in 00:07:33.184 1+0 records out 00:07:33.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447686 s, 9.1 MB/s 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.184 21:38:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:33.184 21:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.184 21:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.184 21:38:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.184 21:38:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.184 21:38:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:33.451 { 00:07:33.451 "nbd_device": "/dev/nbd0", 00:07:33.451 "bdev_name": "Malloc0" 00:07:33.451 }, 00:07:33.451 { 00:07:33.451 "nbd_device": "/dev/nbd1", 00:07:33.451 "bdev_name": "Malloc1" 00:07:33.451 } 
00:07:33.451 ]' 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:33.451 { 00:07:33.451 "nbd_device": "/dev/nbd0", 00:07:33.451 "bdev_name": "Malloc0" 00:07:33.451 }, 00:07:33.451 { 00:07:33.451 "nbd_device": "/dev/nbd1", 00:07:33.451 "bdev_name": "Malloc1" 00:07:33.451 } 00:07:33.451 ]' 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:33.451 /dev/nbd1' 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:33.451 /dev/nbd1' 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:33.451 256+0 records in 00:07:33.451 256+0 records out 00:07:33.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132143 s, 79.4 MB/s 00:07:33.451 21:38:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:33.451 256+0 records in 00:07:33.451 256+0 records out 00:07:33.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246174 s, 42.6 MB/s 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:33.451 256+0 records in 00:07:33.451 256+0 records out 00:07:33.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0368961 s, 28.4 MB/s 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:33.451 21:38:41 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.451 21:38:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.756 21:38:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.014 21:38:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:34.272 21:38:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:34.272 21:38:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:34.838 21:38:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:36.210 [2024-12-10 21:38:43.570373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.210 [2024-12-10 21:38:43.697759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.210 [2024-12-10 21:38:43.697779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.210 [2024-12-10 21:38:43.924781] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:36.210 [2024-12-10 21:38:43.924862] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:37.582 21:38:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:37.582 spdk_app_start Round 2 00:07:37.582 21:38:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:37.582 21:38:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60608 /var/tmp/spdk-nbd.sock 00:07:37.582 21:38:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60608 ']' 00:07:37.582 21:38:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.582 21:38:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.582 21:38:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
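Between attach and teardown the harness audits its own bookkeeping through nbd_get_disks, which returns a JSON array of {nbd_device, bdev_name} pairs; jq extracts the device paths, grep -c counts them, and the count must be 2 while the disks are attached and 0 after teardown. A sketch of that assertion, under the same path assumptions as the sketches above:

    # Assert that exactly $1 NBD devices are currently exported by the app.
    nbd_count_check() {
        local expected=$1 count
        count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
                    nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
        # grep -c exits non-zero when it prints 0, and 0 is a legal answer here,
        # so compare the captured number rather than the pipeline's exit status.
        [ "${count:-0}" -eq "$expected" ]
    }

In the trace, the '[' 0 -ne 0 ']' test after teardown is this comparison confirming the post-round list is empty.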
00:07:37.839 21:38:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.839 21:38:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.839 21:38:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.839 21:38:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:37.839 21:38:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.096 Malloc0 00:07:38.096 21:38:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:38.354 Malloc1 00:07:38.613 21:38:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.613 21:38:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:38.613 /dev/nbd0 00:07:38.871 21:38:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.871 21:38:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.871 1+0 records in 00:07:38.871 1+0 records out 
00:07:38.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351811 s, 11.6 MB/s 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:38.871 21:38:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:38.871 21:38:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.871 21:38:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.871 21:38:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:38.871 /dev/nbd1 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.130 1+0 records in 00:07:39.130 1+0 records out 00:07:39.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374888 s, 10.9 MB/s 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.130 21:38:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:39.130 { 00:07:39.130 "nbd_device": "/dev/nbd0", 00:07:39.130 "bdev_name": "Malloc0" 00:07:39.130 }, 00:07:39.130 { 00:07:39.130 "nbd_device": "/dev/nbd1", 00:07:39.130 "bdev_name": "Malloc1" 00:07:39.130 } 
00:07:39.130 ]' 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:39.130 { 00:07:39.130 "nbd_device": "/dev/nbd0", 00:07:39.130 "bdev_name": "Malloc0" 00:07:39.130 }, 00:07:39.130 { 00:07:39.130 "nbd_device": "/dev/nbd1", 00:07:39.130 "bdev_name": "Malloc1" 00:07:39.130 } 00:07:39.130 ]' 00:07:39.130 21:38:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.388 21:38:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:39.388 /dev/nbd1' 00:07:39.388 21:38:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.388 21:38:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:39.388 /dev/nbd1' 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:39.389 256+0 records in 00:07:39.389 256+0 records out 00:07:39.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128635 s, 81.5 MB/s 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:39.389 256+0 records in 00:07:39.389 256+0 records out 00:07:39.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0338671 s, 31.0 MB/s 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.389 21:38:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:39.389 256+0 records in 00:07:39.389 256+0 records out 00:07:39.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0371361 s, 28.2 MB/s 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:39.389 21:38:47 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.389 21:38:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.648 21:38:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.907 21:38:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:40.166 21:38:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:40.166 21:38:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:40.733 21:38:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:42.110 [2024-12-10 21:38:49.485851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.110 [2024-12-10 21:38:49.616272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.110 [2024-12-10 21:38:49.616272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.368 [2024-12-10 21:38:49.843316] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:42.368 [2024-12-10 21:38:49.843388] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:43.745 21:38:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60608 /var/tmp/spdk-nbd.sock 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60608 ']' 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
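The sequence above drains the NBD test down to zero devices: nbd_get_disks returns an empty JSON array once both disks are stopped, jq extracts no device paths, and grep -c therefore reports 0 (the trailing true keeps the pipeline from failing the script). A minimal standalone sketch of that count, reusing the rpc.py path and socket shown in the trace:

    # Count NBD devices currently exported by the target (paths from the trace).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    # jq prints one /dev/nbdX path per attached bdev; grep -c counts them.
    # grep exits non-zero on zero matches, so '|| true' mirrors the trace's fallback.
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "attached NBD devices: $count"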
00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:43.745 21:38:51 event.app_repeat -- event/event.sh@39 -- # killprocess 60608 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60608 ']' 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60608 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.745 21:38:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60608 00:07:44.003 21:38:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.003 21:38:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.003 killing process with pid 60608 00:07:44.003 21:38:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60608' 00:07:44.003 21:38:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60608 00:07:44.003 21:38:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60608 00:07:44.939 spdk_app_start is called in Round 0. 00:07:44.939 Shutdown signal received, stop current app iteration 00:07:44.939 Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 reinitialization... 00:07:44.939 spdk_app_start is called in Round 1. 00:07:44.939 Shutdown signal received, stop current app iteration 00:07:44.939 Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 reinitialization... 00:07:44.939 spdk_app_start is called in Round 2. 00:07:44.939 Shutdown signal received, stop current app iteration 00:07:44.939 Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 reinitialization... 00:07:44.939 spdk_app_start is called in Round 3. 00:07:44.939 Shutdown signal received, stop current app iteration 00:07:44.939 21:38:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:44.939 21:38:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:44.939 00:07:44.939 real 0m20.088s 00:07:44.939 user 0m42.286s 00:07:44.939 sys 0m3.689s 00:07:44.939 21:38:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.939 21:38:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:44.939 ************************************ 00:07:44.939 END TEST app_repeat 00:07:44.939 ************************************ 00:07:45.198 21:38:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:45.199 21:38:52 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:45.199 21:38:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.199 21:38:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.199 21:38:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.199 ************************************ 00:07:45.199 START TEST cpu_locks 00:07:45.199 ************************************ 00:07:45.199 21:38:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:45.199 * Looking for test storage... 
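The END TEST banner above arrives together with bash time output (real/user/sys) for the whole app_repeat run, and run_test immediately opens the next suite with a matching START banner. The wrapper's implementation is not shown in this log; a simplified analogue of just the observable behavior would be:

    # Simplified analogue of the run_test wrapper: banners plus bash 'time'.
    # The real helper in autotest_common.sh does more (argument checks,
    # xtrace management); only the traced output shape is reproduced here.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }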
00:07:45.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:45.199 21:38:52 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.199 21:38:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.199 21:38:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.458 21:38:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.458 21:38:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.459 21:38:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:45.459 21:38:52 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.459 21:38:52 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.459 --rc genhtml_branch_coverage=1 00:07:45.459 --rc genhtml_function_coverage=1 00:07:45.459 --rc genhtml_legend=1 00:07:45.459 --rc geninfo_all_blocks=1 00:07:45.459 --rc geninfo_unexecuted_blocks=1 00:07:45.459 00:07:45.459 ' 00:07:45.459 21:38:52 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.459 --rc genhtml_branch_coverage=1 00:07:45.459 --rc genhtml_function_coverage=1 
00:07:45.459 --rc genhtml_legend=1 00:07:45.459 --rc geninfo_all_blocks=1 00:07:45.459 --rc geninfo_unexecuted_blocks=1 00:07:45.459 00:07:45.459 ' 00:07:45.459 21:38:52 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.459 --rc genhtml_branch_coverage=1 00:07:45.459 --rc genhtml_function_coverage=1 00:07:45.459 --rc genhtml_legend=1 00:07:45.459 --rc geninfo_all_blocks=1 00:07:45.459 --rc geninfo_unexecuted_blocks=1 00:07:45.459 00:07:45.459 ' 00:07:45.459 21:38:52 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.459 --rc genhtml_branch_coverage=1 00:07:45.459 --rc genhtml_function_coverage=1 00:07:45.459 --rc genhtml_legend=1 00:07:45.459 --rc geninfo_all_blocks=1 00:07:45.459 --rc geninfo_unexecuted_blocks=1 00:07:45.459 00:07:45.459 ' 00:07:45.459 21:38:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:45.459 21:38:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:45.459 21:38:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:45.459 21:38:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:45.459 21:38:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.459 21:38:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.459 21:38:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.459 ************************************ 00:07:45.459 START TEST default_locks 00:07:45.459 ************************************ 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61067 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61067 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61067 ']' 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.459 21:38:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.459 [2024-12-10 21:38:53.080858] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
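The default_locks test above boots a target pinned to core 0 (-m 0x1) and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that launch-and-wait pattern, with paths from the trace and the same retry budget as max_retries (the polling interval here is an arbitrary choice; the real helper also verifies the pid stays alive while polling):

    # Launch an SPDK target on core 0 and poll its RPC socket until ready.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$spdk_tgt" -m 0x1 &
    pid=$!
    for ((i = 0; i < 100; i++)); do
        # spdk_get_version is a cheap RPC; a reply means the target listens.
        "$rpc" -s "$sock" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done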
00:07:45.459 [2024-12-10 21:38:53.080993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61067 ] 00:07:45.718 [2024-12-10 21:38:53.252092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.718 [2024-12-10 21:38:53.389636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.688 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.688 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:46.688 21:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61067 00:07:46.688 21:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61067 00:07:46.688 21:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61067 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61067 ']' 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61067 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61067 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.256 killing process with pid 61067 00:07:47.256 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61067' 00:07:47.257 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61067 00:07:47.257 21:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61067 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61067 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61067 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61067 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61067 ']' 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
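Before tearing the target down, locks_exist above confirms the process really holds its CPU-core file locks: lslocks lists the locks owned by the pid, and grep looks for the spdk_cpu_lock prefix. The same check in isolation:

    # Return 0 if the given pid holds at least one spdk_cpu_lock file lock,
    # exactly as the locks_exist helper in the trace does.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 61067 && echo 'pid 61067 holds its core lock'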
00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.788 ERROR: process (pid: 61067) is no longer running 00:07:49.788 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61067) - No such process 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:49.788 ************************************ 00:07:49.788 END TEST default_locks 00:07:49.788 ************************************ 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:49.788 00:07:49.788 real 0m4.553s 00:07:49.788 user 0m4.361s 00:07:49.788 sys 0m0.843s 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.788 21:38:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.047 21:38:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:50.047 21:38:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.047 21:38:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.047 21:38:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.047 ************************************ 00:07:50.047 START TEST default_locks_via_rpc 00:07:50.047 ************************************ 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61146 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61146 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61146 ']' 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.047 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.047 21:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.047 [2024-12-10 21:38:57.705232] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:07:50.047 [2024-12-10 21:38:57.705374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61146 ] 00:07:50.306 [2024-12-10 21:38:57.886688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.306 [2024-12-10 21:38:58.026864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61146 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61146 00:07:51.694 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61146 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61146 ']' 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61146 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.954 21:38:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61146 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.954 killing process with pid 61146 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61146' 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61146 00:07:51.954 21:38:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61146 00:07:55.242 00:07:55.242 real 0m4.658s 00:07:55.242 user 0m4.461s 00:07:55.242 sys 0m0.869s 00:07:55.243 21:39:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.243 21:39:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.243 ************************************ 00:07:55.243 END TEST default_locks_via_rpc 00:07:55.243 ************************************ 00:07:55.243 21:39:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:55.243 21:39:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.243 21:39:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.243 21:39:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.243 ************************************ 00:07:55.243 START TEST non_locking_app_on_locked_coremask 00:07:55.243 ************************************ 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61228 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61228 /var/tmp/spdk.sock 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61228 ']' 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.243 21:39:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.243 [2024-12-10 21:39:02.438447] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
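Rather than passing --disable-cpumask-locks at startup, the default_locks_via_rpc test above toggles the locks on a live target through two RPCs, checking in between that no lock files are held. The pair of calls in isolation (default socket, as in the trace):

    # Release and re-acquire CPU core lock files on a running target.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks   # drop the core lock files
    "$rpc" framework_enable_cpumask_locks    # claim them again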
00:07:55.243 [2024-12-10 21:39:02.438597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61228 ] 00:07:55.243 [2024-12-10 21:39:02.618755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.243 [2024-12-10 21:39:02.753621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61244 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61244 /var/tmp/spdk2.sock 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61244 ']' 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.180 21:39:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.180 [2024-12-10 21:39:03.875617] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:07:56.180 [2024-12-10 21:39:03.876589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61244 ] 00:07:56.439 [2024-12-10 21:39:04.062765] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
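This test pairs a normally locked target with a second instance on the very same core: the second one passes --disable-cpumask-locks, so it skips lock acquisition entirely, and -r gives it a private RPC socket so the two don't collide. The shape of that setup, as a sketch:

    # Two targets sharing core 0: the first claims the core lock, the second
    # opts out of locking and talks on its own socket, as in the trace above.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &
    locked_pid=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    unlocked_pid=$!
    # In the real test, waitforlisten-style polling follows each launch.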
00:07:56.439 [2024-12-10 21:39:04.062817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.698 [2024-12-10 21:39:04.350702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.230 21:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.230 21:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:59.230 21:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61228 00:07:59.230 21:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61228 00:07:59.230 21:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61228 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61228 ']' 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61228 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61228 00:07:59.798 killing process with pid 61228 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61228' 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61228 00:07:59.798 21:39:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61228 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61244 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61244 ']' 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61244 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61244 00:08:05.083 killing process with pid 61244 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61244' 00:08:05.083 21:39:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61244 00:08:05.083 21:39:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61244 00:08:07.621 ************************************ 00:08:07.621 END TEST non_locking_app_on_locked_coremask 00:08:07.621 ************************************ 00:08:07.621 00:08:07.621 real 0m12.655s 00:08:07.621 user 0m12.748s 00:08:07.621 sys 0m1.605s 00:08:07.621 21:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.621 21:39:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.621 21:39:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:07.621 21:39:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.621 21:39:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.621 21:39:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.621 ************************************ 00:08:07.621 START TEST locking_app_on_unlocked_coremask 00:08:07.621 ************************************ 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61403 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61403 /var/tmp/spdk.sock 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61403 ']' 00:08:07.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.622 21:39:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.622 [2024-12-10 21:39:15.172513] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:08:07.622 [2024-12-10 21:39:15.172857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61403 ] 00:08:07.622 [2024-12-10 21:39:15.347829] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
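Teardown throughout this log goes through one killprocess helper, whose steps are visible in the trace: kill -0 to confirm the pid is alive, ps -o comm= to inspect the command name (reactor_0 here) so a sudo wrapper would be treated specially, then kill followed by wait to reap the process. A condensed version:

    # Condensed form of the killprocess helper traced above; the special
    # handling for sudo-wrapped processes is checked for there but omitted here.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1               # fail fast if already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap and propagate exit status
    }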
00:08:07.622 [2024-12-10 21:39:15.348123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.881 [2024-12-10 21:39:15.485141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61419 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:08.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61419 /var/tmp/spdk2.sock 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61419 ']' 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.820 21:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.820 [2024-12-10 21:39:16.480984] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
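The first target above started with --disable-cpumask-locks, hence the 'CPU core locks deactivated' notice, so no core lock is claimed and the second, locking instance can take core 0 without conflict. The locks themselves live as per-core files under /var/tmp (named spdk_cpu_lock_000 and so on, as the overlapped test later in this log shows); listing the files shows which cores have been claimed at some point, while lslocks, as used by locks_exist, confirms a lock is actually held:

    # Per-core lock files; their presence means the core was claimed, and
    # lslocks (see locks_exist above) tells whether the flock is still held.
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no core lock files'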
00:08:08.820 [2024-12-10 21:39:16.481125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61419 ] 00:08:09.079 [2024-12-10 21:39:16.668314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.339 [2024-12-10 21:39:16.939869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.876 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.876 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:11.876 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61419 00:08:11.876 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61419 00:08:11.876 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:12.443 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61403 00:08:12.443 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61403 ']' 00:08:12.443 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61403 00:08:12.443 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:12.443 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.443 21:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61403 00:08:12.443 killing process with pid 61403 00:08:12.443 21:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.443 21:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.443 21:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61403' 00:08:12.443 21:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61403 00:08:12.443 21:39:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61403 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61419 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61419 ']' 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61419 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61419 00:08:17.713 killing process with pid 61419 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.713 21:39:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61419' 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61419 00:08:17.713 21:39:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61419 00:08:20.256 00:08:20.256 real 0m12.528s 00:08:20.256 user 0m12.673s 00:08:20.256 sys 0m1.650s 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.256 ************************************ 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.256 END TEST locking_app_on_unlocked_coremask 00:08:20.256 ************************************ 00:08:20.256 21:39:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:20.256 21:39:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.256 21:39:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.256 21:39:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.256 ************************************ 00:08:20.256 START TEST locking_app_on_locked_coremask 00:08:20.256 ************************************ 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:20.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61578 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61578 /var/tmp/spdk.sock 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61578 ']' 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.256 21:39:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.256 [2024-12-10 21:39:27.772352] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
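locking_app_on_locked_coremask, starting here, asserts the negative case: with the first target holding the core 0 lock, a second locking target on the same core must fail to come up, so the trace below wraps waitforlisten in the NOT helper, which passes only when the wrapped command fails. A simplified equivalent of that inversion (second_pid is a placeholder, not a name from the trace):

    # Simplified NOT helper: succeed only if the wrapped command fails.
    # The traced helper also validates that its argument is executable.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # expected failure
    }

    NOT waitforlisten "$second_pid" /var/tmp/spdk2.sock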
00:08:20.256 [2024-12-10 21:39:27.772704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61578 ] 00:08:20.256 [2024-12-10 21:39:27.955697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.539 [2024-12-10 21:39:28.102802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61600 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61600 /var/tmp/spdk2.sock 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61600 /var/tmp/spdk2.sock 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61600 /var/tmp/spdk2.sock 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61600 ']' 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.477 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.477 [2024-12-10 21:39:29.199258] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:08:21.477 [2024-12-10 21:39:29.199396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61600 ] 00:08:21.735 [2024-12-10 21:39:29.384105] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61578 has claimed it. 00:08:21.735 [2024-12-10 21:39:29.384177] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:22.302 ERROR: process (pid: 61600) is no longer running 00:08:22.302 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61600) - No such process 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61578 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61578 00:08:22.302 21:39:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61578 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61578 ']' 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61578 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61578 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61578' 00:08:22.871 killing process with pid 61578 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61578 00:08:22.871 21:39:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61578 00:08:25.407 00:08:25.407 real 0m5.300s 00:08:25.407 user 0m5.319s 00:08:25.407 sys 0m1.038s 00:08:25.407 21:39:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.407 ************************************ 00:08:25.407 END 
TEST locking_app_on_locked_coremask 00:08:25.407 ************************************ 00:08:25.407 21:39:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.407 21:39:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:25.407 21:39:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.407 21:39:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.407 21:39:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:25.407 ************************************ 00:08:25.407 START TEST locking_overlapped_coremask 00:08:25.407 ************************************ 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61675 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61675 /var/tmp/spdk.sock 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61675 ']' 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.407 21:39:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.666 [2024-12-10 21:39:33.143626] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:08:25.666 [2024-12-10 21:39:33.143973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61675 ] 00:08:25.666 [2024-12-10 21:39:33.330136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.925 [2024-12-10 21:39:33.474133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.925 [2024-12-10 21:39:33.474256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.925 [2024-12-10 21:39:33.474291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61693 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61693 /var/tmp/spdk2.sock 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61693 /var/tmp/spdk2.sock 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.861 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:26.862 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.862 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61693 /var/tmp/spdk2.sock 00:08:26.862 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61693 ']' 00:08:26.862 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:26.862 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:26.862 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:26.862 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.862 21:39:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:27.120 [2024-12-10 21:39:34.613575] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:08:27.120 [2024-12-10 21:39:34.613711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61693 ] 00:08:27.120 [2024-12-10 21:39:34.800467] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61675 has claimed it. 00:08:27.120 [2024-12-10 21:39:34.800534] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:27.687 ERROR: process (pid: 61693) is no longer running 00:08:27.687 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61693) - No such process 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61675 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61675 ']' 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61675 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61675 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.687 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61675' 00:08:27.687 killing process with pid 61675 00:08:27.688 21:39:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61675 00:08:27.688 21:39:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61675 00:08:30.221 00:08:30.221 real 0m4.888s 00:08:30.221 user 0m13.045s 00:08:30.221 sys 0m0.818s 00:08:30.221 21:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.221 21:39:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:30.221 ************************************ 00:08:30.221 END TEST locking_overlapped_coremask 00:08:30.221 ************************************ 00:08:30.480 21:39:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:30.480 21:39:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.480 21:39:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.480 21:39:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:30.480 ************************************ 00:08:30.480 START TEST locking_overlapped_coremask_via_rpc 00:08:30.480 ************************************ 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61762 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61762 /var/tmp/spdk.sock 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61762 ']' 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.480 21:39:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.480 [2024-12-10 21:39:38.112673] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:08:30.480 [2024-12-10 21:39:38.113005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61762 ] 00:08:30.739 [2024-12-10 21:39:38.307709] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
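[Note] The check_remaining_locks step traced above reduces to a glob-versus-brace-expansion comparison: each claimed core leaves a /var/tmp/spdk_cpu_lock_NNN file, the same files the earlier lslocks | grep spdk_cpu_lock probe looks for. A condensed sketch, paraphrasing event/cpu_locks.sh rather than quoting it:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for mask 0x7
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "lock files match the claimed core mask"
    fi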
00:08:30.739 [2024-12-10 21:39:38.307762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.739 [2024-12-10 21:39:38.446507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.739 [2024-12-10 21:39:38.446634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.739 [2024-12-10 21:39:38.446665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.116 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.116 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61786 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61786 /var/tmp/spdk2.sock 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61786 ']' 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:32.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.117 21:39:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.117 [2024-12-10 21:39:39.543755] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:08:32.117 [2024-12-10 21:39:39.544141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61786 ] 00:08:32.117 [2024-12-10 21:39:39.730627] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
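[Note] With --disable-cpumask-locks both targets come up despite the overlapping masks; the lock claim is deferred until the framework_enable_cpumask_locks RPC, which the test exercises next. A minimal sketch of that flow, with binary, socket, and RPC names as traced above (pids from this run; error handling omitted):

    spdk_tgt -m 0x7  --disable-cpumask-locks &                         # pid 61762
    spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # pid 61786
    rpc.py framework_enable_cpumask_locks              # first claim wins cores 0-2
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "core 2 already claimed, as expected"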
00:08:32.117 [2024-12-10 21:39:39.730694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.376 [2024-12-10 21:39:40.027084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.376 [2024-12-10 21:39:40.030231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.376 [2024-12-10 21:39:40.030267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.952 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.952 [2024-12-10 21:39:42.122241] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61762 has claimed it. 
00:08:34.952 request: 00:08:34.952 { 00:08:34.952 "method": "framework_enable_cpumask_locks", 00:08:34.952 "req_id": 1 00:08:34.952 } 00:08:34.952 Got JSON-RPC error response 00:08:34.952 response: 00:08:34.952 { 00:08:34.952 "code": -32603, 00:08:34.952 "message": "Failed to claim CPU core: 2" 00:08:34.952 } 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61762 /var/tmp/spdk.sock 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61762 ']' 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61786 /var/tmp/spdk2.sock 00:08:34.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61786 ']' 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
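[Note] The "Failed to claim CPU core: 2" response above is pure mask arithmetic: 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so the two reactor sets collide on core 2. A small loop to decode any mask (illustrative only, not part of the test):

    for mask in 0x7 0x1c; do
        printf '%s -> cores:' "$mask"
        for i in {0..7}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
        echo
    done
    # 0x7  -> cores: 0 1 2
    # 0x1c -> cores: 2 3 4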
00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:34.953 00:08:34.953 real 0m4.578s 00:08:34.953 user 0m1.243s 00:08:34.953 sys 0m0.271s 00:08:34.953 ************************************ 00:08:34.953 END TEST locking_overlapped_coremask_via_rpc 00:08:34.953 ************************************ 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.953 21:39:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.953 21:39:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:34.953 21:39:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61762 ]] 00:08:34.953 21:39:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61762 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61762 ']' 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61762 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61762 00:08:34.953 killing process with pid 61762 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61762' 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61762 00:08:34.953 21:39:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61762 00:08:38.239 21:39:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61786 ]] 00:08:38.240 21:39:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61786 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61786 ']' 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61786 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.240 
21:39:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61786 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:38.240 killing process with pid 61786 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61786' 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61786 00:08:38.240 21:39:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61786 00:08:40.144 21:39:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:40.144 21:39:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:40.144 21:39:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61762 ]] 00:08:40.144 Process with pid 61762 is not found 00:08:40.144 Process with pid 61786 is not found 00:08:40.144 21:39:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61762 00:08:40.144 21:39:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61762 ']' 00:08:40.144 21:39:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61762 00:08:40.144 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61762) - No such process 00:08:40.144 21:39:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61762 is not found' 00:08:40.144 21:39:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61786 ]] 00:08:40.144 21:39:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61786 00:08:40.144 21:39:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61786 ']' 00:08:40.144 21:39:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61786 00:08:40.144 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61786) - No such process 00:08:40.144 21:39:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61786 is not found' 00:08:40.144 21:39:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:40.144 00:08:40.144 real 0m55.114s 00:08:40.144 user 1m31.195s 00:08:40.144 sys 0m8.641s 00:08:40.144 21:39:47 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.144 21:39:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.144 ************************************ 00:08:40.144 END TEST cpu_locks 00:08:40.144 ************************************ 00:08:40.403 00:08:40.403 real 1m26.851s 00:08:40.403 user 2m34.073s 00:08:40.403 sys 0m13.695s 00:08:40.403 21:39:47 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.403 ************************************ 00:08:40.403 END TEST event 00:08:40.403 ************************************ 00:08:40.403 21:39:47 event -- common/autotest_common.sh@10 -- # set +x 00:08:40.403 21:39:47 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:40.403 21:39:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.403 21:39:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.403 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:08:40.403 ************************************ 00:08:40.403 START TEST thread 00:08:40.403 ************************************ 00:08:40.403 21:39:47 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:40.403 * Looking for test storage... 
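[Note] The cleanup traced above is deliberately idempotent: once a target is already gone, the kill probes fail with "No such process", the failure is swallowed and reported as "Process with pid ... is not found", and the lock files are removed regardless. A reduced sketch of that pattern; the real helper in autotest_common.sh carries more checks, and wait assumes the target was launched as a child of this shell:

    cleanup_target() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
        rm -f /var/tmp/spdk_cpu_lock_*
    }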
00:08:40.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:40.403 21:39:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:40.403 21:39:48 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:40.403 21:39:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:40.662 21:39:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:40.662 21:39:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.662 21:39:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.662 21:39:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.662 21:39:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.663 21:39:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.663 21:39:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.663 21:39:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.663 21:39:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.663 21:39:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.663 21:39:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.663 21:39:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.663 21:39:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:40.663 21:39:48 thread -- scripts/common.sh@345 -- # : 1 00:08:40.663 21:39:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.663 21:39:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.663 21:39:48 thread -- scripts/common.sh@365 -- # decimal 1 00:08:40.663 21:39:48 thread -- scripts/common.sh@353 -- # local d=1 00:08:40.663 21:39:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.663 21:39:48 thread -- scripts/common.sh@355 -- # echo 1 00:08:40.663 21:39:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.663 21:39:48 thread -- scripts/common.sh@366 -- # decimal 2 00:08:40.663 21:39:48 thread -- scripts/common.sh@353 -- # local d=2 00:08:40.663 21:39:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.663 21:39:48 thread -- scripts/common.sh@355 -- # echo 2 00:08:40.663 21:39:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.663 21:39:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.663 21:39:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.663 21:39:48 thread -- scripts/common.sh@368 -- # return 0 00:08:40.663 21:39:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.663 21:39:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:40.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.663 --rc genhtml_branch_coverage=1 00:08:40.663 --rc genhtml_function_coverage=1 00:08:40.663 --rc genhtml_legend=1 00:08:40.663 --rc geninfo_all_blocks=1 00:08:40.663 --rc geninfo_unexecuted_blocks=1 00:08:40.663 00:08:40.663 ' 00:08:40.663 21:39:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:40.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.663 --rc genhtml_branch_coverage=1 00:08:40.663 --rc genhtml_function_coverage=1 00:08:40.663 --rc genhtml_legend=1 00:08:40.663 --rc geninfo_all_blocks=1 00:08:40.663 --rc geninfo_unexecuted_blocks=1 00:08:40.663 00:08:40.663 ' 00:08:40.663 21:39:48 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:40.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:40.663 --rc genhtml_branch_coverage=1 00:08:40.663 --rc genhtml_function_coverage=1 00:08:40.663 --rc genhtml_legend=1 00:08:40.663 --rc geninfo_all_blocks=1 00:08:40.663 --rc geninfo_unexecuted_blocks=1 00:08:40.663 00:08:40.663 ' 00:08:40.663 21:39:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:40.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.663 --rc genhtml_branch_coverage=1 00:08:40.663 --rc genhtml_function_coverage=1 00:08:40.663 --rc genhtml_legend=1 00:08:40.663 --rc geninfo_all_blocks=1 00:08:40.663 --rc geninfo_unexecuted_blocks=1 00:08:40.663 00:08:40.663 ' 00:08:40.663 21:39:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:40.663 21:39:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:40.663 21:39:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.663 21:39:48 thread -- common/autotest_common.sh@10 -- # set +x 00:08:40.663 ************************************ 00:08:40.663 START TEST thread_poller_perf 00:08:40.663 ************************************ 00:08:40.663 21:39:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:40.663 [2024-12-10 21:39:48.264699] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:08:40.663 [2024-12-10 21:39:48.265592] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61981 ] 00:08:40.922 [2024-12-10 21:39:48.450216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.922 [2024-12-10 21:39:48.590215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.922 Running 1000 pollers for 1 seconds with 1 microseconds period. 
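[Note] (The poller results for this run are printed just below.) The "lt 1.15 2" trace a few lines up walks the component-wise version comparison in scripts/common.sh: split both versions on ".-:", then compare field by field, padding the shorter one with zeros. A condensed sketch, paraphrasing rather than quoting the script:

    cmp_lt() {
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal versions: not less-than
    }
    cmp_lt 1.15 2 && echo "lcov 1.15 < 2, so the branch/function coverage opts are set"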
00:08:42.300 [2024-12-10T21:39:50.031Z] ====================================== 00:08:42.300 [2024-12-10T21:39:50.031Z] busy:2502988308 (cyc) 00:08:42.300 [2024-12-10T21:39:50.032Z] total_run_count: 391000 00:08:42.301 [2024-12-10T21:39:50.032Z] tsc_hz: 2490000000 (cyc) 00:08:42.301 [2024-12-10T21:39:50.032Z] ====================================== 00:08:42.301 [2024-12-10T21:39:50.032Z] poller_cost: 6401 (cyc), 2570 (nsec) 00:08:42.301 00:08:42.301 real 0m1.629s 00:08:42.301 user 0m1.411s 00:08:42.301 sys 0m0.108s 00:08:42.301 21:39:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.301 ************************************ 00:08:42.301 END TEST thread_poller_perf 00:08:42.301 ************************************ 00:08:42.301 21:39:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:42.301 21:39:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:42.301 21:39:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:42.301 21:39:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.301 21:39:49 thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.301 ************************************ 00:08:42.301 START TEST thread_poller_perf 00:08:42.301 ************************************ 00:08:42.301 21:39:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:42.301 [2024-12-10 21:39:49.967666] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:08:42.301 [2024-12-10 21:39:49.967782] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62023 ] 00:08:42.559 [2024-12-10 21:39:50.154199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.818 Running 1000 pollers for 1 seconds with 0 microseconds period. 
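[Note] The ====== blocks are simple derived arithmetic: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Reproducing the 1 µs-period run above (the 0 µs busy-poll run that follows gets the same treatment):

    awk 'BEGIN {
        busy = 2502988308; runs = 391000; tsc_hz = 2490000000
        cost = busy / runs                          # ~6401 cycles per poll
        printf "poller_cost: %d (cyc), %d (nsec)\n", cost, cost / tsc_hz * 1e9
    }'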
00:08:42.818 [2024-12-10 21:39:50.297655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.197 [2024-12-10T21:39:51.928Z] ====================================== 00:08:44.197 [2024-12-10T21:39:51.928Z] busy:2494105498 (cyc) 00:08:44.197 [2024-12-10T21:39:51.928Z] total_run_count: 4711000 00:08:44.197 [2024-12-10T21:39:51.928Z] tsc_hz: 2490000000 (cyc) 00:08:44.197 [2024-12-10T21:39:51.928Z] ====================================== 00:08:44.197 [2024-12-10T21:39:51.928Z] poller_cost: 529 (cyc), 212 (nsec) 00:08:44.197 00:08:44.197 real 0m1.624s 00:08:44.197 user 0m1.407s 00:08:44.197 sys 0m0.110s 00:08:44.197 21:39:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.197 21:39:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 ************************************ 00:08:44.197 END TEST thread_poller_perf 00:08:44.197 ************************************ 00:08:44.197 21:39:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:44.197 00:08:44.197 real 0m3.630s 00:08:44.197 user 0m3.003s 00:08:44.197 sys 0m0.419s 00:08:44.197 ************************************ 00:08:44.197 END TEST thread 00:08:44.197 ************************************ 00:08:44.197 21:39:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.197 21:39:51 thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 21:39:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:44.197 21:39:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:44.197 21:39:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.197 21:39:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.197 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 ************************************ 00:08:44.197 START TEST app_cmdline 00:08:44.197 ************************************ 00:08:44.197 21:39:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:44.197 * Looking for test storage... 
00:08:44.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:44.197 21:39:51 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:44.197 21:39:51 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:44.197 21:39:51 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:44.197 21:39:51 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.197 21:39:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:44.197 21:39:51 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.197 21:39:51 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:44.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.197 --rc genhtml_branch_coverage=1 00:08:44.197 --rc genhtml_function_coverage=1 00:08:44.197 --rc genhtml_legend=1 00:08:44.197 --rc geninfo_all_blocks=1 00:08:44.197 --rc geninfo_unexecuted_blocks=1 00:08:44.197 00:08:44.197 ' 00:08:44.197 21:39:51 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:44.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.197 --rc genhtml_branch_coverage=1 00:08:44.197 --rc genhtml_function_coverage=1 00:08:44.198 --rc genhtml_legend=1 00:08:44.198 --rc geninfo_all_blocks=1 00:08:44.198 --rc geninfo_unexecuted_blocks=1 00:08:44.198 
00:08:44.198 ' 00:08:44.198 21:39:51 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:44.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.198 --rc genhtml_branch_coverage=1 00:08:44.198 --rc genhtml_function_coverage=1 00:08:44.198 --rc genhtml_legend=1 00:08:44.198 --rc geninfo_all_blocks=1 00:08:44.198 --rc geninfo_unexecuted_blocks=1 00:08:44.198 00:08:44.198 ' 00:08:44.198 21:39:51 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:44.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.198 --rc genhtml_branch_coverage=1 00:08:44.198 --rc genhtml_function_coverage=1 00:08:44.198 --rc genhtml_legend=1 00:08:44.198 --rc geninfo_all_blocks=1 00:08:44.198 --rc geninfo_unexecuted_blocks=1 00:08:44.198 00:08:44.198 ' 00:08:44.198 21:39:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:44.198 21:39:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62112 00:08:44.198 21:39:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:44.198 21:39:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62112 00:08:44.198 21:39:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 62112 ']' 00:08:44.198 21:39:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.198 21:39:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.198 21:39:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.198 21:39:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.198 21:39:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:44.457 [2024-12-10 21:39:52.009475] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:08:44.457 [2024-12-10 21:39:52.009877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62112 ] 00:08:44.716 [2024-12-10 21:39:52.194879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.716 [2024-12-10 21:39:52.336018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.652 21:39:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.652 21:39:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:45.652 21:39:53 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:45.911 { 00:08:45.911 "version": "SPDK v25.01-pre git sha1 2104eacf0", 00:08:45.911 "fields": { 00:08:45.911 "major": 25, 00:08:45.911 "minor": 1, 00:08:45.911 "patch": 0, 00:08:45.911 "suffix": "-pre", 00:08:45.911 "commit": "2104eacf0" 00:08:45.911 } 00:08:45.911 } 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:45.911 21:39:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:45.911 21:39:53 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:46.170 request: 00:08:46.170 { 00:08:46.170 "method": "env_dpdk_get_mem_stats", 00:08:46.170 "req_id": 1 00:08:46.170 } 00:08:46.170 Got JSON-RPC error response 00:08:46.170 response: 00:08:46.170 { 00:08:46.170 "code": -32601, 00:08:46.170 "message": "Method not found" 00:08:46.170 } 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:46.170 21:39:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62112 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 62112 ']' 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 62112 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62112 00:08:46.170 killing process with pid 62112 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62112' 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 62112 00:08:46.170 21:39:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 62112 00:08:48.719 00:08:48.719 real 0m4.548s 00:08:48.719 user 0m4.672s 00:08:48.719 sys 0m0.716s 00:08:48.719 ************************************ 00:08:48.719 END TEST app_cmdline 00:08:48.719 ************************************ 00:08:48.719 21:39:56 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.719 21:39:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 21:39:56 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:48.719 21:39:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.719 21:39:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.719 21:39:56 -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 ************************************ 00:08:48.719 START TEST version 00:08:48.719 ************************************ 00:08:48.719 21:39:56 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:48.719 * Looking for test storage... 
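[Note] In the app_cmdline run that just finished, the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else, here env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601 ("Method not found"); the earlier core-claim failure used -32603, the spec's generic internal-error code. The allowlist behaviour in three calls, using scripts/rpc.py as the test does:

    rpc.py spdk_get_version        # allowed: returns the version object above
    rpc.py rpc_get_methods         # allowed: lists the two permitted methods
    rpc.py env_dpdk_get_mem_stats  # rejected: -32601 Method not found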
00:08:48.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:48.719 21:39:56 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:48.719 21:39:56 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:48.719 21:39:56 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:48.979 21:39:56 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:48.979 21:39:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.979 21:39:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.979 21:39:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.979 21:39:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.979 21:39:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.979 21:39:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.979 21:39:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.979 21:39:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.979 21:39:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.979 21:39:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.979 21:39:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.979 21:39:56 version -- scripts/common.sh@344 -- # case "$op" in 00:08:48.979 21:39:56 version -- scripts/common.sh@345 -- # : 1 00:08:48.979 21:39:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.979 21:39:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.979 21:39:56 version -- scripts/common.sh@365 -- # decimal 1 00:08:48.979 21:39:56 version -- scripts/common.sh@353 -- # local d=1 00:08:48.979 21:39:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.979 21:39:56 version -- scripts/common.sh@355 -- # echo 1 00:08:48.979 21:39:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.979 21:39:56 version -- scripts/common.sh@366 -- # decimal 2 00:08:48.979 21:39:56 version -- scripts/common.sh@353 -- # local d=2 00:08:48.979 21:39:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.979 21:39:56 version -- scripts/common.sh@355 -- # echo 2 00:08:48.979 21:39:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.979 21:39:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.979 21:39:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.979 21:39:56 version -- scripts/common.sh@368 -- # return 0 00:08:48.979 21:39:56 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.979 21:39:56 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:48.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.979 --rc genhtml_branch_coverage=1 00:08:48.979 --rc genhtml_function_coverage=1 00:08:48.979 --rc genhtml_legend=1 00:08:48.980 --rc geninfo_all_blocks=1 00:08:48.980 --rc geninfo_unexecuted_blocks=1 00:08:48.980 00:08:48.980 ' 00:08:48.980 21:39:56 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:48.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.980 --rc genhtml_branch_coverage=1 00:08:48.980 --rc genhtml_function_coverage=1 00:08:48.980 --rc genhtml_legend=1 00:08:48.980 --rc geninfo_all_blocks=1 00:08:48.980 --rc geninfo_unexecuted_blocks=1 00:08:48.980 00:08:48.980 ' 00:08:48.980 21:39:56 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:48.980 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:48.980 --rc genhtml_branch_coverage=1 00:08:48.980 --rc genhtml_function_coverage=1 00:08:48.980 --rc genhtml_legend=1 00:08:48.980 --rc geninfo_all_blocks=1 00:08:48.980 --rc geninfo_unexecuted_blocks=1 00:08:48.980 00:08:48.980 ' 00:08:48.980 21:39:56 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:48.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.980 --rc genhtml_branch_coverage=1 00:08:48.980 --rc genhtml_function_coverage=1 00:08:48.980 --rc genhtml_legend=1 00:08:48.980 --rc geninfo_all_blocks=1 00:08:48.980 --rc geninfo_unexecuted_blocks=1 00:08:48.980 00:08:48.980 ' 00:08:48.980 21:39:56 version -- app/version.sh@17 -- # get_header_version major 00:08:48.980 21:39:56 version -- app/version.sh@14 -- # cut -f2 00:08:48.980 21:39:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:48.980 21:39:56 version -- app/version.sh@14 -- # tr -d '"' 00:08:48.980 21:39:56 version -- app/version.sh@17 -- # major=25 00:08:48.980 21:39:56 version -- app/version.sh@18 -- # get_header_version minor 00:08:48.980 21:39:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:48.980 21:39:56 version -- app/version.sh@14 -- # cut -f2 00:08:48.980 21:39:56 version -- app/version.sh@14 -- # tr -d '"' 00:08:48.980 21:39:56 version -- app/version.sh@18 -- # minor=1 00:08:48.980 21:39:56 version -- app/version.sh@19 -- # get_header_version patch 00:08:48.980 21:39:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:48.980 21:39:56 version -- app/version.sh@14 -- # cut -f2 00:08:48.980 21:39:56 version -- app/version.sh@14 -- # tr -d '"' 00:08:48.980 21:39:56 version -- app/version.sh@19 -- # patch=0 00:08:48.980 21:39:56 version -- app/version.sh@20 -- # get_header_version suffix 00:08:48.980 21:39:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:48.980 21:39:56 version -- app/version.sh@14 -- # cut -f2 00:08:48.980 21:39:56 version -- app/version.sh@14 -- # tr -d '"' 00:08:48.980 21:39:56 version -- app/version.sh@20 -- # suffix=-pre 00:08:48.980 21:39:56 version -- app/version.sh@22 -- # version=25.1 00:08:48.980 21:39:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:48.980 21:39:56 version -- app/version.sh@28 -- # version=25.1rc0 00:08:48.980 21:39:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:48.980 21:39:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:48.980 21:39:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:48.980 21:39:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:48.980 00:08:48.980 real 0m0.319s 00:08:48.980 user 0m0.194s 00:08:48.980 sys 0m0.185s 00:08:48.980 ************************************ 00:08:48.980 END TEST version 00:08:48.980 ************************************ 00:08:48.980 21:39:56 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.980 21:39:56 version -- common/autotest_common.sh@10 -- # set +x 00:08:48.980 21:39:56 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:48.980 21:39:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:48.980 21:39:56 -- spdk/autotest.sh@194 -- # uname -s 00:08:48.980 21:39:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:48.980 21:39:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:48.980 21:39:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:48.980 21:39:56 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:48.980 21:39:56 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:48.980 21:39:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:48.980 21:39:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.980 21:39:56 -- common/autotest_common.sh@10 -- # set +x 00:08:48.980 ************************************ 00:08:48.980 START TEST blockdev_nvme 00:08:48.980 ************************************ 00:08:48.980 21:39:56 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:49.239 * Looking for test storage... 00:08:49.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.239 21:39:56 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.239 --rc genhtml_branch_coverage=1 00:08:49.239 --rc genhtml_function_coverage=1 00:08:49.239 --rc genhtml_legend=1 00:08:49.239 --rc geninfo_all_blocks=1 00:08:49.239 --rc geninfo_unexecuted_blocks=1 00:08:49.239 00:08:49.239 ' 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.239 --rc genhtml_branch_coverage=1 00:08:49.239 --rc genhtml_function_coverage=1 00:08:49.239 --rc genhtml_legend=1 00:08:49.239 --rc geninfo_all_blocks=1 00:08:49.239 --rc geninfo_unexecuted_blocks=1 00:08:49.239 00:08:49.239 ' 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.239 --rc genhtml_branch_coverage=1 00:08:49.239 --rc genhtml_function_coverage=1 00:08:49.239 --rc genhtml_legend=1 00:08:49.239 --rc geninfo_all_blocks=1 00:08:49.239 --rc geninfo_unexecuted_blocks=1 00:08:49.239 00:08:49.239 ' 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.239 --rc genhtml_branch_coverage=1 00:08:49.239 --rc genhtml_function_coverage=1 00:08:49.239 --rc genhtml_legend=1 00:08:49.239 --rc geninfo_all_blocks=1 00:08:49.239 --rc geninfo_unexecuted_blocks=1 00:08:49.239 00:08:49.239 ' 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:49.239 21:39:56 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62301 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:49.239 21:39:56 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 62301 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 62301 ']' 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.239 21:39:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 [2024-12-10 21:39:57.054037] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
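A note on the startup pattern traced above: start_spdk_tgt launches build/bin/spdk_tgt in the background, records its pid (62301 in this run), installs a killprocess trap, and waitforlisten blocks until the target answers RPCs on /var/tmp/spdk.sock. A minimal stand-alone sketch of that pattern, run from the repository root — the real helpers live in SPDK's common test scripts, and this is a simplified stand-in, not their exact implementation:

    # Launch the target and wait until its RPC socket is serviceable.
    app=build/bin/spdk_tgt
    sock=/var/tmp/spdk.sock
    "$app" &
    pid=$!
    trap 'kill "$pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds only once the app is listening.
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done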
00:08:49.498 [2024-12-10 21:39:57.054193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62301 ] 00:08:49.757 [2024-12-10 21:39:57.238322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.757 [2024-12-10 21:39:57.379592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.695 21:39:58 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.695 21:39:58 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:08:50.695 21:39:58 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:50.695 21:39:58 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:08:50.695 21:39:58 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:50.695 21:39:58 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:50.695 21:39:58 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:50.695 21:39:58 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:50.695 21:39:58 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.695 21:39:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.955 21:39:58 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.955 21:39:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:08:50.955 21:39:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.955 21:39:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.955 21:39:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.228 21:39:58 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.228 21:39:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:51.228 21:39:58 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.228 21:39:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.228 21:39:58 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.228 21:39:58 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:51.228 21:39:58 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:51.228 21:39:58 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.228 21:39:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.228 21:39:58 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:51.228 21:39:58 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.228 21:39:58 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:51.228 21:39:58 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:51.229 21:39:58 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "89389769-fcb1-4ec1-aca3-5e922dd50617"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "89389769-fcb1-4ec1-aca3-5e922dd50617",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2fd2ec40-d7e5-4cea-870e-49b58525b1c2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2fd2ec40-d7e5-4cea-870e-49b58525b1c2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a953611f-b57a-4bea-beec-c7dc308857a0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a953611f-b57a-4bea-beec-c7dc308857a0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "58a40abc-334f-4c5c-bdd1-783e78dfb60a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "58a40abc-334f-4c5c-bdd1-783e78dfb60a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2edfc5f4-b59c-4cc1-9609-50d68ed4c7d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "2edfc5f4-b59c-4cc1-9609-50d68ed4c7d8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "362ffd0e-9d94-4391-abf4-a055977ff5fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "362ffd0e-9d94-4391-abf4-a055977ff5fd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:51.229 21:39:58 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:51.229 21:39:58 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:51.229 21:39:58 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:51.229 21:39:58 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 62301 00:08:51.229 21:39:58 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 62301 ']' 00:08:51.229 21:39:58 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 62301 00:08:51.229 21:39:58 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:08:51.229 21:39:58 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.229 21:39:58 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62301 00:08:51.492 killing process with pid 62301 00:08:51.492 21:39:58 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.492 21:39:58 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.492 21:39:58 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62301' 00:08:51.492 21:39:58 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 62301 00:08:51.492 21:39:58 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 62301 00:08:54.024 21:40:01 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:54.024 21:40:01 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:54.024 21:40:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:54.024 21:40:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.024 21:40:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.024 ************************************ 00:08:54.024 START TEST bdev_hello_world 00:08:54.024 ************************************ 00:08:54.025 21:40:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:54.025 [2024-12-10 21:40:01.500741] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:08:54.025 [2024-12-10 21:40:01.500880] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62400 ] 00:08:54.025 [2024-12-10 21:40:01.673509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.282 [2024-12-10 21:40:01.808314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.847 [2024-12-10 21:40:02.492404] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:54.847 [2024-12-10 21:40:02.492455] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:54.847 [2024-12-10 21:40:02.492482] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:54.847 [2024-12-10 21:40:02.495623] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:54.847 [2024-12-10 21:40:02.496275] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:54.847 [2024-12-10 21:40:02.496310] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:54.847 [2024-12-10 21:40:02.496474] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
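The hello-world run above (write a buffer to Nvme0n1, read it back, print "Hello World!", then stop the app below) can be reproduced outside the harness with the same two artifacts it used. A sketch from the repository root, with the paths shortened from the absolute ones traced in this log and no options assumed beyond those shown:

    # Open Nvme0n1 from the JSON bdev config, write, read back, compare.
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1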
00:08:54.847 00:08:54.847 [2024-12-10 21:40:02.496497] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:56.251 00:08:56.252 real 0m2.262s 00:08:56.252 user 0m1.863s 00:08:56.252 sys 0m0.291s 00:08:56.252 21:40:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.252 21:40:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:56.252 ************************************ 00:08:56.252 END TEST bdev_hello_world 00:08:56.252 ************************************ 00:08:56.252 21:40:03 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:56.252 21:40:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.252 21:40:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.252 21:40:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.252 ************************************ 00:08:56.252 START TEST bdev_bounds 00:08:56.252 ************************************ 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62443 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62443' 00:08:56.252 Process bdevio pid: 62443 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62443 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62443 ']' 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.252 21:40:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:56.252 [2024-12-10 21:40:03.839621] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
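How the bounds suite above is driven, as traced here and continued below: bdevio is started with -w (wait for an RPC trigger) and -s 0 (the PRE_RESERVED_MEM value set earlier), pointing at the same JSON bdev config, and the companion Python helper then kicks off the CUnit run over the RPC socket. Condensed into a sketch with repo-relative paths:

    # Start bdevio in wait mode, then trigger the tests over RPC.
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    test/bdev/bdevio/tests.py perform_tests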
00:08:56.252 [2024-12-10 21:40:03.839981] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62443 ] 00:08:56.510 [2024-12-10 21:40:04.025765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.510 [2024-12-10 21:40:04.173228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.510 [2024-12-10 21:40:04.173378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.510 [2024-12-10 21:40:04.173408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.455 21:40:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.455 21:40:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:57.455 21:40:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:57.455 I/O targets: 00:08:57.455 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:57.455 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:57.455 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:57.455 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:57.455 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:57.455 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:57.455 00:08:57.455 00:08:57.455 CUnit - A unit testing framework for C - Version 2.1-3 00:08:57.455 http://cunit.sourceforge.net/ 00:08:57.455 00:08:57.455 00:08:57.455 Suite: bdevio tests on: Nvme3n1 00:08:57.455 Test: blockdev write read block ...passed 00:08:57.455 Test: blockdev write zeroes read block ...passed 00:08:57.455 Test: blockdev write zeroes read no split ...passed 00:08:57.455 Test: blockdev write zeroes read split ...passed 00:08:57.455 Test: blockdev write zeroes read split partial ...passed 00:08:57.455 Test: blockdev reset ...[2024-12-10 21:40:05.053784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:57.455 passed 00:08:57.455 Test: blockdev write read 8 blocks ...[2024-12-10 21:40:05.057610] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
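Two completion notices in these suites look alarming but are expected negative-path results — the enclosing tests still report "passed": "COMPARE FAILURE (02/85)" decodes as NVMe status code type 0x2 (Media and Data Integrity Errors) / status code 0x85 (Compare Failure), and "INVALID OPCODE (00/01)" as type 0x0 (Generic Command Status) / code 0x01 (Invalid Command Opcode). A quick way to tally such notices when scanning a saved console log (the file name here is a placeholder):

    grep -Eo '(COMPARE FAILURE|INVALID OPCODE) \([0-9a-f]{2}/[0-9a-f]{2}\)' console.log | sort | uniq -c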
00:08:57.455 passed 00:08:57.455 Test: blockdev write read size > 128k ...passed 00:08:57.455 Test: blockdev write read invalid size ...passed 00:08:57.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:57.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:57.455 Test: blockdev write read max offset ...passed 00:08:57.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:57.455 Test: blockdev writev readv 8 blocks ...passed 00:08:57.455 Test: blockdev writev readv 30 x 1block ...passed 00:08:57.455 Test: blockdev writev readv block ...passed 00:08:57.455 Test: blockdev writev readv size > 128k ...passed 00:08:57.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:57.455 Test: blockdev comparev and writev ...[2024-12-10 21:40:05.066526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0a0a000 len:0x1000 00:08:57.455 [2024-12-10 21:40:05.066582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:57.455 passed 00:08:57.455 Test: blockdev nvme passthru rw ...passed 00:08:57.455 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:05.067464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:08:57.455 Test: blockdev nvme admin passthru ...RP2 0x0 00:08:57.455 [2024-12-10 21:40:05.067626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:57.455 passed 00:08:57.455 Test: blockdev copy ...passed 00:08:57.455 Suite: bdevio tests on: Nvme2n3 00:08:57.455 Test: blockdev write read block ...passed 00:08:57.455 Test: blockdev write zeroes read block ...passed 00:08:57.455 Test: blockdev write zeroes read no split ...passed 00:08:57.455 Test: blockdev write zeroes read split ...passed 00:08:57.455 Test: blockdev write zeroes read split partial ...passed 00:08:57.455 Test: blockdev reset ...[2024-12-10 21:40:05.162068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:57.455 [2024-12-10 21:40:05.166673] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:08:57.455 Test: blockdev write read 8 blocks ...uccessful. 
00:08:57.455 passed 00:08:57.455 Test: blockdev write read size > 128k ...passed 00:08:57.455 Test: blockdev write read invalid size ...passed 00:08:57.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:57.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:57.455 Test: blockdev write read max offset ...passed 00:08:57.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:57.455 Test: blockdev writev readv 8 blocks ...passed 00:08:57.455 Test: blockdev writev readv 30 x 1block ...passed 00:08:57.455 Test: blockdev writev readv block ...passed 00:08:57.455 Test: blockdev writev readv size > 128k ...passed 00:08:57.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:57.455 Test: blockdev comparev and writev ...[2024-12-10 21:40:05.176568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x293406000 len:0x1000 00:08:57.455 [2024-12-10 21:40:05.176622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:57.455 passed 00:08:57.455 Test: blockdev nvme passthru rw ...passed 00:08:57.455 Test: blockdev nvme passthru vendor specific ...passed 00:08:57.455 Test: blockdev nvme admin passthru ...[2024-12-10 21:40:05.177499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:57.455 [2024-12-10 21:40:05.177539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:57.455 passed 00:08:57.455 Test: blockdev copy ...passed 00:08:57.455 Suite: bdevio tests on: Nvme2n2 00:08:57.455 Test: blockdev write read block ...passed 00:08:57.722 Test: blockdev write zeroes read block ...passed 00:08:57.722 Test: blockdev write zeroes read no split ...passed 00:08:57.722 Test: blockdev write zeroes read split ...passed 00:08:57.722 Test: blockdev write zeroes read split partial ...passed 00:08:57.722 Test: blockdev reset ...[2024-12-10 21:40:05.255062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:57.722 [2024-12-10 21:40:05.259219] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:08:57.722 Test: blockdev write read 8 blocks ...uccessful. 
00:08:57.722 passed 00:08:57.722 Test: blockdev write read size > 128k ...passed 00:08:57.722 Test: blockdev write read invalid size ...passed 00:08:57.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:57.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:57.722 Test: blockdev write read max offset ...passed 00:08:57.722 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:57.722 Test: blockdev writev readv 8 blocks ...passed 00:08:57.722 Test: blockdev writev readv 30 x 1block ...passed 00:08:57.722 Test: blockdev writev readv block ...passed 00:08:57.722 Test: blockdev writev readv size > 128k ...passed 00:08:57.722 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:57.722 Test: blockdev comparev and writev ...[2024-12-10 21:40:05.269706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0a3c000 len:0x1000 00:08:57.722 [2024-12-10 21:40:05.269887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:57.722 passed 00:08:57.722 Test: blockdev nvme passthru rw ...passed 00:08:57.722 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:05.271021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:57.722 [2024-12-10 21:40:05.271182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:57.722 passed 00:08:57.722 Test: blockdev nvme admin passthru ...passed 00:08:57.722 Test: blockdev copy ...passed 00:08:57.722 Suite: bdevio tests on: Nvme2n1 00:08:57.722 Test: blockdev write read block ...passed 00:08:57.722 Test: blockdev write zeroes read block ...passed 00:08:57.722 Test: blockdev write zeroes read no split ...passed 00:08:57.722 Test: blockdev write zeroes read split ...passed 00:08:57.722 Test: blockdev write zeroes read split partial ...passed 00:08:57.722 Test: blockdev reset ...[2024-12-10 21:40:05.347672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:57.722 passed 00:08:57.722 Test: blockdev write read 8 blocks ...[2024-12-10 21:40:05.351757] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
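The comparev completions above carry "lba:0 len:1" in the command and "len:0x1000" in the SGL data-block descriptor, which is consistent: one logical block of the 4096-byte block_size that bdev_get_bdevs reported earlier. As a trivial check:

    printf '%d\n' 0x1000    # 4096, i.e. exactly one 4 KiB block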
00:08:57.722 passed 00:08:57.722 Test: blockdev write read size > 128k ...passed 00:08:57.722 Test: blockdev write read invalid size ...passed 00:08:57.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:57.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:57.722 Test: blockdev write read max offset ...passed 00:08:57.722 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:57.722 Test: blockdev writev readv 8 blocks ...passed 00:08:57.722 Test: blockdev writev readv 30 x 1block ...passed 00:08:57.722 Test: blockdev writev readv block ...passed 00:08:57.722 Test: blockdev writev readv size > 128k ...passed 00:08:57.722 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:57.722 Test: blockdev comparev and writev ...[2024-12-10 21:40:05.360370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0a38000 len:0x1000 00:08:57.722 [2024-12-10 21:40:05.360429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:57.722 passed 00:08:57.722 Test: blockdev nvme passthru rw ...passed 00:08:57.722 Test: blockdev nvme passthru vendor specific ...passed 00:08:57.722 Test: blockdev nvme admin passthru ...[2024-12-10 21:40:05.361304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:57.722 [2024-12-10 21:40:05.361343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:57.722 passed 00:08:57.722 Test: blockdev copy ...passed 00:08:57.722 Suite: bdevio tests on: Nvme1n1 00:08:57.722 Test: blockdev write read block ...passed 00:08:57.722 Test: blockdev write zeroes read block ...passed 00:08:57.722 Test: blockdev write zeroes read no split ...passed 00:08:57.722 Test: blockdev write zeroes read split ...passed 00:08:57.722 Test: blockdev write zeroes read split partial ...passed 00:08:57.722 Test: blockdev reset ...[2024-12-10 21:40:05.438707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:57.722 passed 00:08:57.723 Test: blockdev write read 8 blocks ...[2024-12-10 21:40:05.442480] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
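Each "blockdev reset" test in these suites drives the same disconnect/reconnect path visible in the nvme_ctrlr.c and bdev_nvme.c notices. Assuming a target is still running with the controllers from this run's JSON config, the same path can be exercised by hand over RPC — a sketch, not part of this test flow:

    # Ask the bdev_nvme layer to reset one attached controller by name.
    scripts/rpc.py bdev_nvme_reset_controller Nvme1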
00:08:57.723 passed 00:08:57.723 Test: blockdev write read size > 128k ...passed 00:08:57.723 Test: blockdev write read invalid size ...passed 00:08:57.723 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:57.723 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:57.723 Test: blockdev write read max offset ...passed 00:08:57.723 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:57.723 Test: blockdev writev readv 8 blocks ...passed 00:08:57.723 Test: blockdev writev readv 30 x 1block ...passed 00:08:57.723 Test: blockdev writev readv block ...passed 00:08:57.723 Test: blockdev writev readv size > 128k ...passed 00:08:57.723 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:57.723 Test: blockdev comparev and writev ...[2024-12-10 21:40:05.451379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0a34000 len:0x1000 00:08:57.723 [2024-12-10 21:40:05.451433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:57.723 passed 00:08:57.723 Test: blockdev nvme passthru rw ...passed 00:08:57.723 Test: blockdev nvme passthru vendor specific ...passed 00:08:57.723 Test: blockdev nvme admin passthru ...[2024-12-10 21:40:05.452294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:57.982 [2024-12-10 21:40:05.452338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:57.982 passed 00:08:57.982 Test: blockdev copy ...passed 00:08:57.982 Suite: bdevio tests on: Nvme0n1 00:08:57.982 Test: blockdev write read block ...passed 00:08:57.982 Test: blockdev write zeroes read block ...passed 00:08:57.982 Test: blockdev write zeroes read no split ...passed 00:08:57.982 Test: blockdev write zeroes read split ...passed 00:08:57.982 Test: blockdev write zeroes read split partial ...passed 00:08:57.982 Test: blockdev reset ...[2024-12-10 21:40:05.532319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:57.982 [2024-12-10 21:40:05.536194] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:08:57.982 Test: blockdev write read 8 blocks ...uccessful. 00:08:57.982 passed 00:08:57.982 Test: blockdev write read size > 128k ...passed 00:08:57.982 Test: blockdev write read invalid size ...passed 00:08:57.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:57.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:57.982 Test: blockdev write read max offset ...passed 00:08:57.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:57.982 Test: blockdev writev readv 8 blocks ...passed 00:08:57.982 Test: blockdev writev readv 30 x 1block ...passed 00:08:57.982 Test: blockdev writev readv block ...passed 00:08:57.982 Test: blockdev writev readv size > 128k ...passed 00:08:57.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:57.982 Test: blockdev comparev and writev ...passed 00:08:57.982 Test: blockdev nvme passthru rw ...[2024-12-10 21:40:05.545194] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:57.982 separate metadata which is not supported yet. 
00:08:57.982 passed 00:08:57.982 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:05.545978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:57.982 [2024-12-10 21:40:05.546186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:57.982 passed 00:08:57.982 Test: blockdev nvme admin passthru ...passed 00:08:57.982 Test: blockdev copy ...passed 00:08:57.982 00:08:57.982 Run Summary: Type Total Ran Passed Failed Inactive 00:08:57.982 suites 6 6 n/a 0 0 00:08:57.982 tests 138 138 138 0 0 00:08:57.982 asserts 893 893 893 0 n/a 00:08:57.982 00:08:57.982 Elapsed time = 1.541 seconds 00:08:57.982 0 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62443 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62443 ']' 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62443 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62443 00:08:57.982 killing process with pid 62443 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62443' 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62443 00:08:57.982 21:40:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62443 00:08:59.358 21:40:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:59.358 00:08:59.358 real 0m2.966s 00:08:59.358 user 0m7.484s 00:08:59.358 sys 0m0.449s 00:08:59.358 21:40:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.358 21:40:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:59.358 ************************************ 00:08:59.358 END TEST bdev_bounds 00:08:59.358 ************************************ 00:08:59.358 21:40:06 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:59.358 21:40:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:59.358 21:40:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.358 21:40:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.358 ************************************ 00:08:59.358 START TEST bdev_nbd 00:08:59.358 ************************************ 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62508 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62508 /var/tmp/spdk-nbd.sock 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62508 ']' 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.358 21:40:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:59.358 [2024-12-10 21:40:06.890592] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
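The nbd test that follows exposes each bdev as a kernel block device and verifies it with a one-block direct-I/O dd, exactly as the nbd_start_disk / dd / nbd_get_disks traces below show. Condensed into a sketch using this run's socket path (the output file is a placeholder; the RPC prints the assigned device, captured here as in the trace):

    # Map a bdev to an nbd device, read one 4 KiB block through the
    # kernel block layer, then list the active bdev-to-nbd mappings.
    dev=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks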
00:08:59.358 [2024-12-10 21:40:06.891224] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.358 [2024-12-10 21:40:07.077093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.616 [2024-12-10 21:40:07.213944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.553 21:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.553 21:40:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:00.554 21:40:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.554 1+0 records in 
00:09:00.554 1+0 records out 00:09:00.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054557 s, 7.5 MB/s 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:00.554 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.813 1+0 records in 00:09:00.813 1+0 records out 00:09:00.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675674 s, 6.1 MB/s 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:00.813 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.073 1+0 records in 00:09:01.073 1+0 records out 00:09:01.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646541 s, 6.3 MB/s 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:01.073 21:40:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.331 1+0 records in 00:09:01.331 1+0 records out 00:09:01.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510166 s, 8.0 MB/s 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.331 21:40:09 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:01.331 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:01.332 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.590 1+0 records in 00:09:01.590 1+0 records out 00:09:01.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908124 s, 4.5 MB/s 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:01.590 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.850 1+0 records in 00:09:01.850 1+0 records out 00:09:01.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791748 s, 5.2 MB/s 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:01.850 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.109 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd0", 00:09:02.109 "bdev_name": "Nvme0n1" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd1", 00:09:02.109 "bdev_name": "Nvme1n1" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd2", 00:09:02.109 "bdev_name": "Nvme2n1" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd3", 00:09:02.109 "bdev_name": "Nvme2n2" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd4", 00:09:02.109 "bdev_name": "Nvme2n3" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd5", 00:09:02.109 "bdev_name": "Nvme3n1" 00:09:02.109 } 00:09:02.109 ]' 00:09:02.109 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:02.109 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd0", 00:09:02.109 "bdev_name": "Nvme0n1" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd1", 00:09:02.109 "bdev_name": "Nvme1n1" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd2", 00:09:02.109 "bdev_name": "Nvme2n1" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd3", 00:09:02.109 "bdev_name": "Nvme2n2" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd4", 00:09:02.109 "bdev_name": "Nvme2n3" 00:09:02.109 }, 00:09:02.109 { 00:09:02.109 "nbd_device": "/dev/nbd5", 00:09:02.109 "bdev_name": "Nvme3n1" 00:09:02.109 } 00:09:02.109 ]' 00:09:02.109 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:02.368 21:40:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:02.368 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.368 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:02.368 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:02.368 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:02.368 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.368 21:40:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.368 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.627 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.886 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.146 21:40:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.405 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.663 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:03.922 21:40:11 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:03.922 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:04.181 /dev/nbd0 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.181 
21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.181 1+0 records in 00:09:04.181 1+0 records out 00:09:04.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473662 s, 8.6 MB/s 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:04.181 21:40:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:04.440 /dev/nbd1 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.440 1+0 records in 00:09:04.440 1+0 records out 00:09:04.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955419 s, 4.3 MB/s 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:04.440 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:04.699 /dev/nbd10 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.699 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.700 1+0 records in 00:09:04.700 1+0 records out 00:09:04.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000875354 s, 4.7 MB/s 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:04.700 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:04.958 /dev/nbd11 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.958 21:40:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:04.958 1+0 records in 00:09:04.958 1+0 records out 00:09:04.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000812829 s, 5.0 MB/s 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:04.958 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:05.217 /dev/nbd12 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.217 1+0 records in 00:09:05.217 1+0 records out 00:09:05.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733173 s, 5.6 MB/s 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:05.217 21:40:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:05.477 /dev/nbd13 
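The xtrace block above repeats once per exported namespace: nbd_start_disk attaches a bdev to an /dev/nbdN node, then the waitfornbd helper from common/autotest_common.sh (the sh@872-893 lines) confirms the device is actually usable. A condensed bash sketch of that helper as it can be inferred from the trace — the 0.1 s sleep and the /tmp scratch path are assumptions; the real helper uses the test directory seen in the dd/stat lines above:

waitfornbd() {
    local nbd_name=$1 i size
    # Poll (up to 20 tries) until the kernel lists the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Retry a single 4 KiB direct-I/O read to prove the data path end to end.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]   # succeed only if the read-back returned data
}

The final size check is what the '[' 4096 '!=' 0 ']' lines in the trace correspond to.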
00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.477 1+0 records in 00:09:05.477 1+0 records out 00:09:05.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00260046 s, 1.6 MB/s 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.477 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.736 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.736 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd0", 00:09:05.736 "bdev_name": "Nvme0n1" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd1", 00:09:05.736 "bdev_name": "Nvme1n1" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd10", 00:09:05.736 "bdev_name": "Nvme2n1" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd11", 00:09:05.736 "bdev_name": "Nvme2n2" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd12", 00:09:05.736 "bdev_name": "Nvme2n3" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd13", 00:09:05.736 "bdev_name": "Nvme3n1" 00:09:05.736 } 00:09:05.736 ]' 00:09:05.736 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.736 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd0", 00:09:05.736 "bdev_name": "Nvme0n1" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd1", 00:09:05.736 "bdev_name": "Nvme1n1" 00:09:05.736 
}, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd10", 00:09:05.736 "bdev_name": "Nvme2n1" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd11", 00:09:05.736 "bdev_name": "Nvme2n2" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd12", 00:09:05.736 "bdev_name": "Nvme2n3" 00:09:05.736 }, 00:09:05.736 { 00:09:05.736 "nbd_device": "/dev/nbd13", 00:09:05.736 "bdev_name": "Nvme3n1" 00:09:05.736 } 00:09:05.736 ]' 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:05.995 /dev/nbd1 00:09:05.995 /dev/nbd10 00:09:05.995 /dev/nbd11 00:09:05.995 /dev/nbd12 00:09:05.995 /dev/nbd13' 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:05.995 /dev/nbd1 00:09:05.995 /dev/nbd10 00:09:05.995 /dev/nbd11 00:09:05.995 /dev/nbd12 00:09:05.995 /dev/nbd13' 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:05.995 256+0 records in 00:09:05.995 256+0 records out 00:09:05.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120976 s, 86.7 MB/s 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:05.995 256+0 records in 00:09:05.995 256+0 records out 00:09:05.995 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11498 s, 9.1 MB/s 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:05.995 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:06.254 256+0 records in 00:09:06.254 256+0 records out 00:09:06.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121273 s, 8.6 MB/s 00:09:06.254 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.254 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:06.254 256+0 records in 00:09:06.254 256+0 records out 00:09:06.254 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.125144 s, 8.4 MB/s 00:09:06.254 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.254 21:40:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:06.513 256+0 records in 00:09:06.513 256+0 records out 00:09:06.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121956 s, 8.6 MB/s 00:09:06.513 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.513 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:06.513 256+0 records in 00:09:06.513 256+0 records out 00:09:06.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118396 s, 8.9 MB/s 00:09:06.513 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:06.513 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:06.772 256+0 records in 00:09:06.772 256+0 records out 00:09:06.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13049 s, 8.0 MB/s 00:09:06.772 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:06.772 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:06.773 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.032 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.292 21:40:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.553 21:40:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:07.553 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:07.814 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.073 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:08.332 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:08.332 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:08.332 21:40:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:08.332 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:08.591 malloc_lvol_verify 00:09:08.591 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:08.849 6dc7a87d-a58b-433a-83df-1da3c2ba5c91 00:09:08.849 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:09.109 00b56c4f-d6c6-4b28-be18-7f62e2e8165f 00:09:09.109 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:09.368 /dev/nbd0 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:09.368 mke2fs 1.47.0 (5-Feb-2023) 00:09:09.368 Discarding device blocks: 0/4096 done 00:09:09.368 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:09.368 00:09:09.368 Allocating group tables: 0/1 done 00:09:09.368 Writing inode tables: 0/1 done 00:09:09.368 Creating journal (1024 blocks): done 00:09:09.368 Writing superblocks and filesystem accounting information: 0/1 done 00:09:09.368 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:09.368 21:40:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.368 21:40:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62508 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62508 ']' 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62508 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62508 00:09:09.628 killing process with pid 62508 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62508' 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62508 00:09:09.628 21:40:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62508 00:09:11.006 21:40:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:11.006 00:09:11.006 real 0m11.740s 00:09:11.006 user 0m15.129s 00:09:11.006 sys 0m4.964s 00:09:11.006 21:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.006 21:40:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:11.006 ************************************ 00:09:11.006 END TEST bdev_nbd 00:09:11.007 ************************************ 00:09:11.007 skipping fio tests on NVMe due to multi-ns failures. 00:09:11.007 21:40:18 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:11.007 21:40:18 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:09:11.007 21:40:18 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
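That closes the bdev_nbd test: after the dd/cmp data verification, nbd_with_lvol_verify builds a malloc bdev, layers an lvstore and a small lvol on it, exports the lvol as /dev/nbd0, and requires mkfs.ext4 to succeed before killprocess shuts down pid 62508. A condensed sketch of the RPC sequence visible in the trace (error handling and the /sys/block/nbd0/size capacity wait omitted; paths and sizes are as logged — a 16 MiB malloc bdev with 512-byte blocks and a 4 MiB lvol):

# Helper wrapping the per-test RPC socket used throughout this run.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore "lvs" on the malloc bdev
rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume in "lvs"
rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                   # filesystem creation is the pass/fail check
rpc nbd_stop_disk /dev/nbd0                           # detach before stopping the target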
00:09:11.007 21:40:18 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:11.007 21:40:18 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:11.007 21:40:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:11.007 21:40:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:11.007 21:40:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:11.007 ************************************
00:09:11.007 START TEST bdev_verify
00:09:11.007 ************************************
00:09:11.007 21:40:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:11.007 [2024-12-10 21:40:18.695344] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:09:11.266 [2024-12-10 21:40:18.695486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62905 ]
00:09:11.525 [2024-12-10 21:40:18.881088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:11.525 [2024-12-10 21:40:19.021155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:12.149 [2024-12-10 21:40:19.021189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:14.458 Running I/O for 5 seconds...
00:09:14.458 23488.00 IOPS, 91.75 MiB/s
[2024-12-10T21:40:23.122Z] 23936.00 IOPS, 93.50 MiB/s
[2024-12-10T21:40:24.059Z] 23530.67 IOPS, 91.92 MiB/s
[2024-12-10T21:40:24.992Z] 22976.00 IOPS, 89.75 MiB/s
[2024-12-10T21:40:24.992Z] 22886.40 IOPS, 89.40 MiB/s
00:09:17.261 Latency(us)
00:09:17.261 [2024-12-10T21:40:24.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:17.261 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x0 length 0xbd0bd
00:09:17.261 Nvme0n1 : 5.04 1853.16 7.24 0.00 0.00 68838.34 15897.09 65693.92
00:09:17.261 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:17.261 Nvme0n1 : 5.05 1913.43 7.47 0.00 0.00 66623.94 7843.26 65272.80
00:09:17.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x0 length 0xa0000
00:09:17.261 Nvme1n1 : 5.04 1852.68 7.24 0.00 0.00 68752.83 16844.59 63167.23
00:09:17.261 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0xa0000 length 0xa0000
00:09:17.261 Nvme1n1 : 5.05 1912.93 7.47 0.00 0.00 66539.60 8159.10 61482.77
00:09:17.261 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x0 length 0x80000
00:09:17.261 Nvme2n1 : 5.06 1858.09 7.26 0.00 0.00 68410.68 7474.79 62325.00
00:09:17.261 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x80000 length 0x80000
00:09:17.261 Nvme2n1 : 5.07 1920.21 7.50 0.00 0.00 66232.75 11633.30 60640.54
00:09:17.261 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x0 length 0x80000
00:09:17.261 Nvme2n2 : 5.06 1857.64 7.26 0.00 0.00 68333.08 7527.43 63588.34
00:09:17.261 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x80000 length 0x80000
00:09:17.261 Nvme2n2 : 5.07 1919.65 7.50 0.00 0.00 66140.29 11264.82 59377.20
00:09:17.261 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x0 length 0x80000
00:09:17.261 Nvme2n3 : 5.07 1866.41 7.29 0.00 0.00 68026.63 8211.74 64851.69
00:09:17.261 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x80000 length 0x80000
00:09:17.261 Nvme2n3 : 5.07 1919.18 7.50 0.00 0.00 66060.15 10948.99 57692.74
00:09:17.261 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x0 length 0x20000
00:09:17.261 Nvme3n1 : 5.08 1865.94 7.29 0.00 0.00 67941.17 8106.46 65693.92
00:09:17.261 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:17.261 Verification LBA range: start 0x20000 length 0x20000
00:09:17.261 Nvme3n1 : 5.07 1918.73 7.50 0.00 0.00 65985.58 10475.23 56429.39
00:09:17.261 [2024-12-10T21:40:24.992Z] ===================================================================================================================
00:09:17.261 [2024-12-10T21:40:24.992Z] Total : 22658.06 88.51 0.00 0.00 67305.93 7474.79 65693.92
00:09:19.156
00:09:19.156 real 0m7.775s
00:09:19.156 user 0m14.288s
00:09:19.156 sys 0m0.353s
00:09:19.156 21:40:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:19.156 21:40:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:19.156 ************************************
00:09:19.156 END TEST bdev_verify
00:09:19.156 ************************************
00:09:19.156 21:40:26 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:19.156 21:40:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:19.156 21:40:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:19.156 21:40:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:19.156 ************************************
00:09:19.156 START TEST bdev_verify_big_io
00:09:19.156 ************************************
00:09:19.156 21:40:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:19.156 [2024-12-10 21:40:26.568172] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:09:19.156 [2024-12-10 21:40:26.568398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63003 ] 00:09:19.156 [2024-12-10 21:40:26.756895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.413 [2024-12-10 21:40:26.894149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.413 [2024-12-10 21:40:26.894184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.346 Running I/O for 5 seconds... 00:09:24.787 1852.00 IOPS, 115.75 MiB/s [2024-12-10T21:40:33.455Z] 3236.00 IOPS, 202.25 MiB/s [2024-12-10T21:40:33.721Z] 3772.00 IOPS, 235.75 MiB/s 00:09:25.990 Latency(us) 00:09:25.990 [2024-12-10T21:40:33.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.990 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.990 Verification LBA range: start 0x0 length 0xbd0b 00:09:25.990 Nvme0n1 : 5.44 173.60 10.85 0.00 0.00 720675.13 14107.35 795064.85 00:09:25.990 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.990 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:25.990 Nvme0n1 : 5.52 185.29 11.58 0.00 0.00 676815.86 20213.51 720948.64 00:09:25.990 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.990 Verification LBA range: start 0x0 length 0xa000 00:09:25.990 Nvme1n1 : 5.53 174.69 10.92 0.00 0.00 693538.68 44006.50 815278.37 00:09:25.990 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.990 Verification LBA range: start 0xa000 length 0xa000 00:09:25.990 Nvme1n1 : 5.52 185.39 11.59 0.00 0.00 660742.79 66115.03 609774.32 00:09:25.990 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.990 Verification LBA range: start 0x0 length 0x8000 00:09:25.990 Nvme2n1 : 5.59 169.16 10.57 0.00 0.00 694128.30 75379.56 1246499.98 00:09:25.991 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.991 Verification LBA range: start 0x8000 length 0x8000 00:09:25.991 Nvme2n1 : 5.59 183.50 11.47 0.00 0.00 645748.85 83801.86 626618.91 00:09:25.991 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.991 Verification LBA range: start 0x0 length 0x8000 00:09:25.991 Nvme2n2 : 5.62 178.58 11.16 0.00 0.00 649375.58 32636.40 1259975.66 00:09:25.991 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.991 Verification LBA range: start 0x8000 length 0x8000 00:09:25.991 Nvme2n2 : 5.59 186.96 11.69 0.00 0.00 624086.79 40005.91 643463.51 00:09:25.991 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.991 Verification LBA range: start 0x0 length 0x8000 00:09:25.991 Nvme2n3 : 5.65 193.47 12.09 0.00 0.00 586251.93 16634.04 923083.77 00:09:25.991 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:25.991 Verification LBA range: start 0x8000 length 0x8000 00:09:25.991 Nvme2n3 : 5.63 201.17 12.57 0.00 0.00 573660.58 10159.40 656939.18 00:09:25.991 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:25.991 Verification LBA range: start 0x0 length 0x2000 00:09:25.991 Nvme3n1 : 5.71 221.26 13.83 0.00 0.00 503300.97 352.03 1313878.36 00:09:25.991 Job: Nvme3n1 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 65536) 00:09:25.991 Verification LBA range: start 0x2000 length 0x2000 00:09:25.991 Nvme3n1 : 5.63 204.46 12.78 0.00 0.00 551693.91 6527.28 663677.02 00:09:25.991 [2024-12-10T21:40:33.722Z] =================================================================================================================== 00:09:25.991 [2024-12-10T21:40:33.722Z] Total : 2257.54 141.10 0.00 0.00 626390.96 352.03 1313878.36 00:09:27.912 00:09:27.912 real 0m9.034s 00:09:27.912 user 0m16.733s 00:09:27.912 sys 0m0.396s 00:09:27.912 ************************************ 00:09:27.912 END TEST bdev_verify_big_io 00:09:27.912 ************************************ 00:09:27.912 21:40:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.912 21:40:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:27.912 21:40:35 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.912 21:40:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:27.912 21:40:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.912 21:40:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.912 ************************************ 00:09:27.912 START TEST bdev_write_zeroes 00:09:27.912 ************************************ 00:09:27.912 21:40:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.912 [2024-12-10 21:40:35.634200] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:27.912 [2024-12-10 21:40:35.634358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63118 ] 00:09:28.171 [2024-12-10 21:40:35.822122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.430 [2024-12-10 21:40:35.958638] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.996 Running I/O for 1 seconds... 
00:09:30.386 75648.00 IOPS, 295.50 MiB/s 00:09:30.386 Latency(us) 00:09:30.386 [2024-12-10T21:40:38.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.386 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:30.386 Nvme0n1 : 1.02 12575.16 49.12 0.00 0.00 10152.89 8632.85 29478.04 00:09:30.386 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:30.386 Nvme1n1 : 1.02 12562.51 49.07 0.00 0.00 10150.89 8948.69 29688.60 00:09:30.386 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:30.386 Nvme2n1 : 1.02 12550.56 49.03 0.00 0.00 10118.04 8580.22 26635.51 00:09:30.386 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:30.386 Nvme2n2 : 1.02 12590.19 49.18 0.00 0.00 10055.13 5737.69 23371.87 00:09:30.386 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:30.386 Nvme2n3 : 1.02 12578.71 49.14 0.00 0.00 10032.65 5711.37 21792.69 00:09:30.386 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:30.386 Nvme3n1 : 1.02 12567.31 49.09 0.00 0.00 10012.44 6079.85 20424.07 00:09:30.386 [2024-12-10T21:40:38.117Z] =================================================================================================================== 00:09:30.386 [2024-12-10T21:40:38.117Z] Total : 75424.43 294.63 0.00 0.00 10086.87 5711.37 29688.60 00:09:31.334 00:09:31.334 real 0m3.359s 00:09:31.334 user 0m2.974s 00:09:31.334 sys 0m0.265s 00:09:31.334 21:40:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.334 21:40:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:31.334 ************************************ 00:09:31.334 END TEST bdev_write_zeroes 00:09:31.334 ************************************ 00:09:31.334 21:40:38 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:31.334 21:40:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:31.334 21:40:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.334 21:40:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.334 ************************************ 00:09:31.334 START TEST bdev_json_nonenclosed 00:09:31.334 ************************************ 00:09:31.334 21:40:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:31.593 [2024-12-10 21:40:39.086630] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
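The MiB/s column in the write_zeroes table above is simply IOPS multiplied by the 4096-byte I/O size, so the headline figure can be checked by hand: 75648 x 4096 = 309,854,208 B/s, and dividing by 1 MiB (1,048,576 B) gives exactly 295.50 MiB/s. A one-liner to reproduce that arithmetic:

    echo "scale=2; 75648 * 4096 / 1048576" | bc    # prints 295.50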
00:09:31.593 [2024-12-10 21:40:39.087061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63171 ] 00:09:31.593 [2024-12-10 21:40:39.286405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.851 [2024-12-10 21:40:39.420634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.851 [2024-12-10 21:40:39.420751] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:31.851 [2024-12-10 21:40:39.420778] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:31.851 [2024-12-10 21:40:39.420793] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.110 00:09:32.110 real 0m0.721s 00:09:32.110 user 0m0.451s 00:09:32.110 sys 0m0.162s 00:09:32.110 21:40:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.110 21:40:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:32.110 ************************************ 00:09:32.110 END TEST bdev_json_nonenclosed 00:09:32.110 ************************************ 00:09:32.110 21:40:39 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:32.110 21:40:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:32.110 21:40:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.110 21:40:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:32.110 ************************************ 00:09:32.110 START TEST bdev_json_nonarray 00:09:32.110 ************************************ 00:09:32.110 21:40:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:32.369 [2024-12-10 21:40:39.866760] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:32.369 [2024-12-10 21:40:39.866910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63202 ] 00:09:32.369 [2024-12-10 21:40:40.051705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.627 [2024-12-10 21:40:40.182071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.627 [2024-12-10 21:40:40.182431] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
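The two JSON negative tests in this stretch feed spdk_app_start deliberately malformed --json files and expect it to abort, which is what the two *ERROR* lines record. A sketch of input shapes that would trigger each message (the /tmp example files are hypothetical; the repo's actual nonenclosed.json and nonarray.json may differ, and the valid shape matches the subsystem config loaded later during GPT setup):

    # "not enclosed in {}": the top level is not a JSON object
    echo '"subsystems": []' > /tmp/nonenclosed_example.json
    # "'subsystems' should be an array": an object where an array is required
    echo '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > /tmp/nonarray_example.json
    # minimal accepted shape: an object holding a "subsystems" array
    echo '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > /tmp/valid_example.json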
00:09:32.627 [2024-12-10 21:40:40.182470] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:32.627 [2024-12-10 21:40:40.182486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.887 00:09:32.887 real 0m0.687s 00:09:32.887 user 0m0.436s 00:09:32.887 sys 0m0.145s 00:09:32.887 ************************************ 00:09:32.887 END TEST bdev_json_nonarray 00:09:32.887 ************************************ 00:09:32.887 21:40:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.887 21:40:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:32.887 21:40:40 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:32.887 00:09:32.887 real 0m43.845s 00:09:32.887 user 1m4.170s 00:09:32.887 sys 0m8.244s 00:09:32.887 21:40:40 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.887 21:40:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:32.887 ************************************ 00:09:32.887 END TEST blockdev_nvme 00:09:32.887 ************************************ 00:09:32.887 21:40:40 -- spdk/autotest.sh@209 -- # uname -s 00:09:32.887 21:40:40 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:32.887 21:40:40 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:32.887 21:40:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:32.887 21:40:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.887 21:40:40 -- common/autotest_common.sh@10 -- # set +x 00:09:33.147 ************************************ 00:09:33.147 START TEST blockdev_nvme_gpt 00:09:33.147 ************************************ 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:33.147 * Looking for test storage... 
00:09:33.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.147 21:40:40 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.147 --rc genhtml_branch_coverage=1 00:09:33.147 --rc genhtml_function_coverage=1 00:09:33.147 --rc genhtml_legend=1 00:09:33.147 --rc geninfo_all_blocks=1 00:09:33.147 --rc geninfo_unexecuted_blocks=1 00:09:33.147 00:09:33.147 ' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.147 --rc 
genhtml_branch_coverage=1 00:09:33.147 --rc genhtml_function_coverage=1 00:09:33.147 --rc genhtml_legend=1 00:09:33.147 --rc geninfo_all_blocks=1 00:09:33.147 --rc geninfo_unexecuted_blocks=1 00:09:33.147 00:09:33.147 ' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.147 --rc genhtml_branch_coverage=1 00:09:33.147 --rc genhtml_function_coverage=1 00:09:33.147 --rc genhtml_legend=1 00:09:33.147 --rc geninfo_all_blocks=1 00:09:33.147 --rc geninfo_unexecuted_blocks=1 00:09:33.147 00:09:33.147 ' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.147 --rc genhtml_branch_coverage=1 00:09:33.147 --rc genhtml_function_coverage=1 00:09:33.147 --rc genhtml_legend=1 00:09:33.147 --rc geninfo_all_blocks=1 00:09:33.147 --rc geninfo_unexecuted_blocks=1 00:09:33.147 00:09:33.147 ' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63287 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 63287 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 63287 ']' 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.147 21:40:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:33.407 [2024-12-10 21:40:40.988177] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:33.407 [2024-12-10 21:40:40.989081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63287 ] 00:09:33.666 [2024-12-10 21:40:41.176901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.666 [2024-12-10 21:40:41.303696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.605 21:40:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.605 21:40:42 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:09:34.605 21:40:42 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:34.605 21:40:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:09:34.605 21:40:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:35.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.432 Waiting for block devices as requested 00:09:35.432 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.432 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.691 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.691 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:40.988 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:40.988 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:40.988 21:40:48 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:40.988 21:40:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
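The get_zoned_devs walk unrolled above reduces to one sysfs probe per namespace: a device counts as zoned when /sys/block/<ns>/queue/zoned reads anything other than "none", and every [[ none != none ]] above is that probe coming back negative. A condensed sketch of the same check (simplified to iterate /sys/block directly rather than /sys/class/nvme as the helper does):

    # report any zoned NVMe namespaces; host-aware/host-managed devices show up here
    for ns in /sys/block/nvme*n*; do
        zoned=$(cat "$ns/queue/zoned" 2>/dev/null)
        [[ -n $zoned && $zoned != none ]] && echo "${ns##*/} is zoned ($zoned)"
    done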
00:09:40.988 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:40.988 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:40.989 BYT; 00:09:40.989 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:40.989 BYT; 00:09:40.989 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:40.989 21:40:48 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:40.989 21:40:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:41.926 The operation has completed successfully. 00:09:41.926 21:40:49 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:43.303 The operation has completed successfully. 00:09:43.303 21:40:50 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:43.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.436 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.436 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.436 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.436 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.695 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:44.695 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.695 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.695 [] 00:09:44.695 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.695 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:44.695 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:44.695 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:44.695 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:44.695 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:44.695 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.695 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.954 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.954 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:44.955 21:40:52 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.955 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:09:44.955 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.955 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.955 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.955 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.214 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:45.214 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:45.214 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:45.214 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.214 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:45.214 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.214 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:45.215 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e3a5ea8b-ba47-4087-beea-f3be482fdc22"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e3a5ea8b-ba47-4087-beea-f3be482fdc22",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' 
' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3bbb719f-3f33-4bd2-9332-6b6218486b68"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3bbb719f-3f33-4bd2-9332-6b6218486b68",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' 
"ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4351e01d-5326-4ead-a70a-d0fa0d12a21a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4351e01d-5326-4ead-a70a-d0fa0d12a21a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c5035578-7a28-4028-92c0-24311403d1f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c5035578-7a28-4028-92c0-24311403d1f7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "fe6da089-bf20-43b7-88f6-d2dae99fa7db"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fe6da089-bf20-43b7-88f6-d2dae99fa7db",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:45.215 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:45.215 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:45.215 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:45.215 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:45.215 21:40:52 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 63287 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 63287 ']' 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 63287 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63287 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.215 killing process with pid 63287 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63287' 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 63287 00:09:45.215 21:40:52 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 63287 00:09:47.748 21:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:47.748 21:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:47.748 21:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:47.748 21:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.748 21:40:55 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:47.748 ************************************ 00:09:47.748 START TEST bdev_hello_world 00:09:47.748 ************************************ 00:09:47.748 21:40:55 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:47.748 [2024-12-10 21:40:55.408196] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:09:47.748 [2024-12-10 21:40:55.408331] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63929 ] 00:09:48.007 [2024-12-10 21:40:55.591827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.007 [2024-12-10 21:40:55.713180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.943 [2024-12-10 21:40:56.404447] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:48.943 [2024-12-10 21:40:56.404514] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:48.943 [2024-12-10 21:40:56.404545] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:48.943 [2024-12-10 21:40:56.407579] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:48.943 [2024-12-10 21:40:56.408201] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:48.943 [2024-12-10 21:40:56.408246] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:48.943 [2024-12-10 21:40:56.408521] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
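The hello_world test is a thin wrapper around SPDK's hello_bdev example, which opens the named bdev, writes a buffer through an I/O channel, reads it back, and prints the string seen above. Rerunning it by hand is a single command, essentially as it appears in the trace (-b names the bdev to open; sudo is assumed here for hugepage access):

    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1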
00:09:48.943 00:09:48.943 [2024-12-10 21:40:56.408558] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:49.879 00:09:49.879 real 0m2.267s 00:09:49.879 user 0m1.881s 00:09:49.879 sys 0m0.277s 00:09:49.879 21:40:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.879 ************************************ 00:09:49.879 END TEST bdev_hello_world 00:09:49.879 ************************************ 00:09:49.879 21:40:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:50.138 21:40:57 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:50.138 21:40:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.138 21:40:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.138 21:40:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:50.138 ************************************ 00:09:50.138 START TEST bdev_bounds 00:09:50.138 ************************************ 00:09:50.138 21:40:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:50.138 21:40:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63976 00:09:50.138 21:40:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:50.138 21:40:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:50.138 Process bdevio pid: 63976 00:09:50.138 21:40:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63976' 00:09:50.139 21:40:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63976 00:09:50.139 21:40:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63976 ']' 00:09:50.139 21:40:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.139 21:40:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.139 21:40:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.139 21:40:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.139 21:40:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:50.139 [2024-12-10 21:40:57.758308] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:09:50.139 [2024-12-10 21:40:57.758462] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63976 ] 00:09:50.397 [2024-12-10 21:40:57.938451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.397 [2024-12-10 21:40:58.066679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.397 [2024-12-10 21:40:58.066825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.397 [2024-12-10 21:40:58.066865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.334 21:40:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.334 21:40:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:51.334 21:40:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:51.334 I/O targets: 00:09:51.334 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:51.334 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:51.334 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:51.334 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:51.334 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:51.334 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:51.334 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:51.334 00:09:51.334 00:09:51.334 CUnit - A unit testing framework for C - Version 2.1-3 00:09:51.334 http://cunit.sourceforge.net/ 00:09:51.334 00:09:51.334 00:09:51.334 Suite: bdevio tests on: Nvme3n1 00:09:51.334 Test: blockdev write read block ...passed 00:09:51.334 Test: blockdev write zeroes read block ...passed 00:09:51.334 Test: blockdev write zeroes read no split ...passed 00:09:51.334 Test: blockdev write zeroes read split ...passed 00:09:51.334 Test: blockdev write zeroes read split partial ...passed 00:09:51.334 Test: blockdev reset ...[2024-12-10 21:40:58.984971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:51.334 passed 00:09:51.334 Test: blockdev write read 8 blocks ...[2024-12-10 21:40:58.989181] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:51.334 passed 00:09:51.334 Test: blockdev write read size > 128k ...passed 00:09:51.334 Test: blockdev write read invalid size ...passed 00:09:51.334 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:51.334 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:51.334 Test: blockdev write read max offset ...passed 00:09:51.334 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:51.334 Test: blockdev writev readv 8 blocks ...passed 00:09:51.334 Test: blockdev writev readv 30 x 1block ...passed 00:09:51.334 Test: blockdev writev readv block ...passed 00:09:51.334 Test: blockdev writev readv size > 128k ...passed 00:09:51.334 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:51.334 Test: blockdev comparev and writev ...[2024-12-10 21:40:58.998586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae204000 len:0x1000 00:09:51.334 passed 00:09:51.334 Test: blockdev nvme passthru rw ...[2024-12-10 21:40:58.998837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:51.334 passed 00:09:51.334 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:58.999754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:51.334 passed 00:09:51.335 Test: blockdev nvme admin passthru ...[2024-12-10 21:40:58.999965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:51.335 passed 00:09:51.335 Test: blockdev copy ...passed 00:09:51.335 Suite: bdevio tests on: Nvme2n3 00:09:51.335 Test: blockdev write read block ...passed 00:09:51.335 Test: blockdev write zeroes read block ...passed 00:09:51.335 Test: blockdev write zeroes read no split ...passed 00:09:51.335 Test: blockdev write zeroes read split ...passed 00:09:51.593 Test: blockdev write zeroes read split partial ...passed 00:09:51.593 Test: blockdev reset ...[2024-12-10 21:40:59.082609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:51.593 [2024-12-10 21:40:59.087240] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:51.593 passed 00:09:51.593 Test: blockdev write read 8 blocks ...passed 00:09:51.593 Test: blockdev write read size > 128k ...passed 00:09:51.593 Test: blockdev write read invalid size ...passed 00:09:51.593 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:51.593 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:51.593 Test: blockdev write read max offset ...passed 00:09:51.593 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:51.593 Test: blockdev writev readv 8 blocks ...passed 00:09:51.593 Test: blockdev writev readv 30 x 1block ...passed 00:09:51.593 Test: blockdev writev readv block ...passed 00:09:51.593 Test: blockdev writev readv size > 128k ...passed 00:09:51.593 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:51.593 Test: blockdev comparev and writev ...[2024-12-10 21:40:59.098073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae202000 len:0x1000 00:09:51.593 [2024-12-10 21:40:59.098325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:51.593 passed 00:09:51.593 Test: blockdev nvme passthru rw ...passed 00:09:51.593 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:59.099519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:51.593 passed 00:09:51.593 Test: blockdev nvme admin passthru ...[2024-12-10 21:40:59.099708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:51.593 passed 00:09:51.593 Test: blockdev copy ...passed 00:09:51.593 Suite: bdevio tests on: Nvme2n2 00:09:51.593 Test: blockdev write read block ...passed 00:09:51.593 Test: blockdev write zeroes read block ...passed 00:09:51.593 Test: blockdev write zeroes read no split ...passed 00:09:51.593 Test: blockdev write zeroes read split ...passed 00:09:51.593 Test: blockdev write zeroes read split partial ...passed 00:09:51.593 Test: blockdev reset ...[2024-12-10 21:40:59.177245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:51.593 passed 00:09:51.593 Test: blockdev write read 8 blocks ...[2024-12-10 21:40:59.181386] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
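The COMPARE FAILURE (02/85) completions logged inside every "comparev and writev" case are the expected outcome, not a fault: the test issues an NVMe COMPARE whose buffer deliberately mismatches the on-media data, and status code type 02h (media errors) with status code 85h is the spec's Compare Failure. Roughly the same completion can be provoked by hand, assuming nvme-cli and a scratch namespace whose contents you can destroy (device path and LBA are placeholders):

    # Force an NVMe COMPARE mismatch on a throwaway namespace (sketch)
    dd if=/dev/urandom of=/tmp/a bs=4096 count=1
    dd if=/dev/zero    of=/tmp/b bs=4096 count=1
    # NVMe counts are zero-based: --block-count=0 means one block
    nvme write   /dev/nvme0n1 --start-block=0 --block-count=0 \
                 --data-size=4096 --data=/tmp/a
    nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 \
                 --data-size=4096 --data=/tmp/b   # expect a Compare Failure status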
00:09:51.593 passed 00:09:51.593 Test: blockdev write read size > 128k ...passed 00:09:51.593 Test: blockdev write read invalid size ...passed 00:09:51.593 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:51.593 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:51.593 Test: blockdev write read max offset ...passed 00:09:51.593 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:51.593 Test: blockdev writev readv 8 blocks ...passed 00:09:51.593 Test: blockdev writev readv 30 x 1block ...passed 00:09:51.593 Test: blockdev writev readv block ...passed 00:09:51.594 Test: blockdev writev readv size > 128k ...passed 00:09:51.594 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:51.594 Test: blockdev comparev and writev ...[2024-12-10 21:40:59.190708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2038000 len:0x1000 00:09:51.594 [2024-12-10 21:40:59.190933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:51.594 passed 00:09:51.594 Test: blockdev nvme passthru rw ...passed 00:09:51.594 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:59.191970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:51.594 passed 00:09:51.594 Test: blockdev nvme admin passthru ...[2024-12-10 21:40:59.192194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:51.594 passed 00:09:51.594 Test: blockdev copy ...passed 00:09:51.594 Suite: bdevio tests on: Nvme2n1 00:09:51.594 Test: blockdev write read block ...passed 00:09:51.594 Test: blockdev write zeroes read block ...passed 00:09:51.594 Test: blockdev write zeroes read no split ...passed 00:09:51.594 Test: blockdev write zeroes read split ...passed 00:09:51.594 Test: blockdev write zeroes read split partial ...passed 00:09:51.594 Test: blockdev reset ...[2024-12-10 21:40:59.271214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:51.594 [2024-12-10 21:40:59.275342] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:09:51.594 passed 00:09:51.594 Test: blockdev write read 8 blocks ...
00:09:51.594 passed 00:09:51.594 Test: blockdev write read size > 128k ...passed 00:09:51.594 Test: blockdev write read invalid size ...passed 00:09:51.594 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:51.594 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:51.594 Test: blockdev write read max offset ...passed 00:09:51.594 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:51.594 Test: blockdev writev readv 8 blocks ...passed 00:09:51.594 Test: blockdev writev readv 30 x 1block ...passed 00:09:51.594 Test: blockdev writev readv block ...passed 00:09:51.594 Test: blockdev writev readv size > 128k ...passed 00:09:51.594 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:51.594 Test: blockdev comparev and writev ...[2024-12-10 21:40:59.284483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2034000 len:0x1000 00:09:51.594 passed 00:09:51.594 Test: blockdev nvme passthru rw ...[2024-12-10 21:40:59.284708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:51.594 passed 00:09:51.594 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:59.285586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:51.594 passed 00:09:51.594 Test: blockdev nvme admin passthru ...[2024-12-10 21:40:59.285824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:51.594 passed 00:09:51.594 Test: blockdev copy ...passed 00:09:51.594 Suite: bdevio tests on: Nvme1n1p2 00:09:51.594 Test: blockdev write read block ...passed 00:09:51.594 Test: blockdev write zeroes read block ...passed 00:09:51.594 Test: blockdev write zeroes read no split ...passed 00:09:51.853 Test: blockdev write zeroes read split ...passed 00:09:51.853 Test: blockdev write zeroes read split partial ...passed 00:09:51.853 Test: blockdev reset ...[2024-12-10 21:40:59.367215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:51.853 [2024-12-10 21:40:59.370907] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
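Likewise, the INVALID OPCODE (00/01) completions in the "nvme passthru vendor specific" and "nvme admin passthru" cases are the pass condition: bdevio forwards a command the controller does not implement and only checks that the error status is propagated intact through the bdev layer (the FABRIC CONNECT / FABRIC RESERVED lines are just the qpair printer's rendering of the opcode under test). A comparable rejection can be seen from the shell, assuming nvme-cli; the opcode below is an arbitrary, presumed-unimplemented value:

    # Send an unimplemented admin opcode; expect INVALID_OPCODE (SCT 0h, SC 01h) (sketch)
    nvme admin-passthru /dev/nvme0 --opcode=0x7f --read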
00:09:51.853 passed 00:09:51.853 Test: blockdev write read size > 128k ...passed 00:09:51.853 Test: blockdev write read invalid size ...passed 00:09:51.853 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:51.853 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:51.853 Test: blockdev write read max offset ...passed 00:09:51.853 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:51.853 Test: blockdev writev readv 8 blocks ...passed 00:09:51.853 Test: blockdev writev readv 30 x 1block ...passed 00:09:51.853 Test: blockdev writev readv block ...passed 00:09:51.853 Test: blockdev writev readv size > 128k ...passed 00:09:51.853 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:51.853 Test: blockdev comparev and writev ...[2024-12-10 21:40:59.381420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c2030000 len:0x1000 00:09:51.853 [2024-12-10 21:40:59.381656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:51.853 passed 00:09:51.853 Test: blockdev nvme passthru rw ...passed 00:09:51.853 Test: blockdev nvme passthru vendor specific ...passed 00:09:51.853 Test: blockdev nvme admin passthru ...passed 00:09:51.853 Test: blockdev copy ...passed 00:09:51.853 Suite: bdevio tests on: Nvme1n1p1 00:09:51.853 Test: blockdev write read block ...passed 00:09:51.853 Test: blockdev write zeroes read block ...passed 00:09:51.853 Test: blockdev write zeroes read no split ...passed 00:09:51.853 Test: blockdev write zeroes read split ...passed 00:09:51.853 Test: blockdev write zeroes read split partial ...passed 00:09:51.853 Test: blockdev reset ...[2024-12-10 21:40:59.453578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:51.853 [2024-12-10 21:40:59.457418] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:09:51.853 passed 00:09:51.853 Test: blockdev write read 8 blocks ...
00:09:51.853 passed 00:09:51.853 Test: blockdev write read size > 128k ...passed 00:09:51.853 Test: blockdev write read invalid size ...passed 00:09:51.853 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:51.853 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:51.853 Test: blockdev write read max offset ...passed 00:09:51.853 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:51.853 Test: blockdev writev readv 8 blocks ...passed 00:09:51.853 Test: blockdev writev readv 30 x 1block ...passed 00:09:51.853 Test: blockdev writev readv block ...passed 00:09:51.853 Test: blockdev writev readv size > 128k ...passed 00:09:51.853 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:51.853 Test: blockdev comparev and writev ...[2024-12-10 21:40:59.467140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ae40e000 len:0x1000 00:09:51.853 passed 00:09:51.853 Test: blockdev nvme passthru rw ...passed 00:09:51.853 Test: blockdev nvme passthru vendor specific ...passed 00:09:51.853 Test: blockdev nvme admin passthru ...passed 00:09:51.853 Test: blockdev copy ...[2024-12-10 21:40:59.467355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:51.853 passed 00:09:51.853 Suite: bdevio tests on: Nvme0n1 00:09:51.853 Test: blockdev write read block ...passed 00:09:51.853 Test: blockdev write zeroes read block ...passed 00:09:51.853 Test: blockdev write zeroes read no split ...passed 00:09:51.853 Test: blockdev write zeroes read split ...passed 00:09:51.853 Test: blockdev write zeroes read split partial ...passed 00:09:51.853 Test: blockdev reset ...[2024-12-10 21:40:59.537608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:51.853 passed 00:09:51.853 Test: blockdev write read 8 blocks ...[2024-12-10 21:40:59.541384] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:51.853 passed 00:09:51.853 Test: blockdev write read size > 128k ...passed 00:09:51.853 Test: blockdev write read invalid size ...passed 00:09:51.853 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:51.853 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:51.853 Test: blockdev write read max offset ...passed 00:09:51.853 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:51.853 Test: blockdev writev readv 8 blocks ...passed 00:09:51.853 Test: blockdev writev readv 30 x 1block ...passed 00:09:51.853 Test: blockdev writev readv block ...passed 00:09:51.853 Test: blockdev writev readv size > 128k ...passed 00:09:51.853 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:51.853 Test: blockdev comparev and writev ...passed 00:09:51.853 Test: blockdev nvme passthru rw ...[2024-12-10 21:40:59.549168] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:51.853 separate metadata which is not supported yet. 
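Two details in the completions above are worth decoding. First, the LBAs show the GPT partition bdevs translating partition-relative offsets into absolute namespace LBAs: Nvme1n1p1's COMPARE lands at lba:256, and with its 655104 blocks (from the I/O targets list) the next partition begins at 256 + 655104 = 655360, exactly the lba logged for Nvme1n1p2's COMPARE at partition offset 0. Second, comparev_and_writev is skipped on Nvme0n1 because that namespace is formatted with separate metadata, which bdevio does not support yet. Whether a namespace carries per-block metadata can be checked with nvme-cli, assuming the usual device naming (path is a placeholder):

    # Show LBA formats; a non-zero 'Metadata Size' on the in-use format
    # means the namespace carries separate per-block metadata (sketch)
    nvme id-ns /dev/nvme0n1 --human-readable | grep 'LBA Format'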
00:09:51.853 passed 00:09:51.853 Test: blockdev nvme passthru vendor specific ...[2024-12-10 21:40:59.549922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:51.853 passed 00:09:51.853 Test: blockdev nvme admin passthru ...[2024-12-10 21:40:59.550137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:51.853 passed 00:09:51.853 Test: blockdev copy ...passed 00:09:51.853 00:09:51.853 Run Summary: Type Total Ran Passed Failed Inactive 00:09:51.853 suites 7 7 n/a 0 0 00:09:51.853 tests 161 161 161 0 0 00:09:51.853 asserts 1025 1025 1025 0 n/a 00:09:51.853 00:09:51.854 Elapsed time = 1.792 seconds 00:09:51.854 0 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63976 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63976 ']' 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63976 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63976 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63976' 00:09:52.112 killing process with pid 63976 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63976 00:09:52.112 21:40:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63976 00:09:53.047 21:41:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:53.047 00:09:53.047 real 0m3.064s 00:09:53.047 user 0m7.812s 00:09:53.047 sys 0m0.489s 00:09:53.047 21:41:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.047 ************************************ 00:09:53.047 END TEST bdev_bounds 00:09:53.047 ************************************ 00:09:53.047 21:41:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:53.307 21:41:00 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:53.307 21:41:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.307 21:41:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.307 21:41:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:53.307 ************************************ 00:09:53.307 START TEST bdev_nbd 00:09:53.307 ************************************ 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=64041 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 64041 /var/tmp/spdk-nbd.sock 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 64041 ']' 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:53.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.307 21:41:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:53.307 [2024-12-10 21:41:00.911546] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:09:53.307 [2024-12-10 21:41:00.911871] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.566 [2024-12-10 21:41:01.097735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.566 [2024-12-10 21:41:01.217422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:54.503 21:41:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.503 1+0 records in 00:09:54.503 1+0 records out 00:09:54.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402541 s, 10.2 MB/s 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:54.503 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.763 1+0 records in 00:09:54.763 1+0 records out 00:09:54.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735181 s, 5.6 MB/s 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:54.763 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.023 1+0 records in 00:09:55.023 1+0 records out 00:09:55.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703883 s, 5.8 MB/s 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:55.023 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.282 1+0 records in 00:09:55.282 1+0 records out 00:09:55.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727719 s, 5.6 MB/s 00:09:55.282 21:41:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.282 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:55.282 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.282 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:55.282 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:55.282 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:55.282 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.543 1+0 records in 00:09:55.543 1+0 records out 00:09:55.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736784 s, 5.6 MB/s 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:55.543 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.803 1+0 records in 00:09:55.803 1+0 records out 00:09:55.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668961 s, 6.1 MB/s 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:55.803 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.063 1+0 records in 00:09:56.063 1+0 records out 00:09:56.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808487 s, 5.1 MB/s 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:56.063 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.322 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:56.322 21:41:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:56.322 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:56.322 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:56.322 21:41:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.322 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:56.322 { 00:09:56.323 "nbd_device": "/dev/nbd0", 00:09:56.323 "bdev_name": "Nvme0n1" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd1", 00:09:56.323 "bdev_name": "Nvme1n1p1" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd2", 00:09:56.323 "bdev_name": "Nvme1n1p2" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd3", 00:09:56.323 "bdev_name": "Nvme2n1" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd4", 00:09:56.323 "bdev_name": "Nvme2n2" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd5", 00:09:56.323 "bdev_name": "Nvme2n3" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd6", 00:09:56.323 "bdev_name": "Nvme3n1" 00:09:56.323 } 00:09:56.323 ]' 00:09:56.323 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:56.323 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd0", 00:09:56.323 "bdev_name": "Nvme0n1" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd1", 00:09:56.323 "bdev_name": "Nvme1n1p1" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd2", 00:09:56.323 "bdev_name": "Nvme1n1p2" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd3", 00:09:56.323 "bdev_name": "Nvme2n1" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd4", 00:09:56.323 "bdev_name": "Nvme2n2" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd5", 00:09:56.323 "bdev_name": "Nvme2n3" 00:09:56.323 }, 00:09:56.323 { 00:09:56.323 "nbd_device": "/dev/nbd6", 00:09:56.323 "bdev_name": "Nvme3n1" 00:09:56.323 } 00:09:56.323 ]' 00:09:56.323 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.582 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.841 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.100 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.100 21:41:04 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.438 21:41:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.438 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:57.696 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:57.696 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:57.696 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:57.696 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.696 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.696 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:57.696 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.696 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.697 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.697 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.956 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:58.215 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:58.216 
21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:58.216 21:41:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:58.475 /dev/nbd0 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:58.475 1+0 records in 00:09:58.475 1+0 records out 00:09:58.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511302 s, 8.0 MB/s 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:58.475 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:58.734 /dev/nbd1 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:58.734 21:41:06 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:58.734 1+0 records in 00:09:58.734 1+0 records out 00:09:58.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744447 s, 5.5 MB/s 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:58.734 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:58.993 /dev/nbd10 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:58.993 1+0 records in 00:09:58.993 1+0 records out 00:09:58.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000827255 s, 5.0 MB/s 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:58.993 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:59.253 /dev/nbd11 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:59.253 1+0 records in 00:09:59.253 1+0 records out 00:09:59.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737683 s, 5.6 MB/s 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:59.253 21:41:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:59.511 /dev/nbd12 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
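Each device exported above runs through the same readiness check before the trace moves on to the next one: poll /proc/partitions until the nbd name appears, then prove the device actually services I/O with a single direct read. A minimal sketch of that helper as reconstructed from the traced line numbers (autotest_common.sh@872-893); the retry sleep and the temp-file path are assumptions, since xtrace does not echo them:

    # Reconstruction of waitfornbd from the xtrace above; sleep and tmp path assumed.
    waitfornbd() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off between polls; not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # One direct 4 KiB read confirms the device really serves I/O.
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
               [ "$(stat -c %s /tmp/nbdtest)" != 0 ]; then
                rm -f /tmp/nbdtest
                return 0
            fi
        done
        return 1
    }

The trace continues below with the remaining devices going through this same loop.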
00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:59.511 1+0 records in 00:09:59.511 1+0 records out 00:09:59.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067549 s, 6.1 MB/s 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:59.511 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:59.770 /dev/nbd13 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:59.770 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:59.770 1+0 records in 00:09:59.770 1+0 records out 00:09:59.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830306 s, 4.9 MB/s 00:09:59.771 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:59.771 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:59.771 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:59.771 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:59.771 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:59.771 21:41:07 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:59.771 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:59.771 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:00.029 /dev/nbd14 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:00.029 1+0 records in 00:10:00.029 1+0 records out 00:10:00.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133268 s, 3.1 MB/s 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.029 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:00.288 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:00.288 { 00:10:00.288 "nbd_device": "/dev/nbd0", 00:10:00.288 "bdev_name": "Nvme0n1" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd1", 00:10:00.289 "bdev_name": "Nvme1n1p1" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd10", 00:10:00.289 "bdev_name": "Nvme1n1p2" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd11", 00:10:00.289 "bdev_name": "Nvme2n1" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd12", 00:10:00.289 "bdev_name": "Nvme2n2" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd13", 00:10:00.289 "bdev_name": "Nvme2n3" 
00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd14", 00:10:00.289 "bdev_name": "Nvme3n1" 00:10:00.289 } 00:10:00.289 ]' 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd0", 00:10:00.289 "bdev_name": "Nvme0n1" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd1", 00:10:00.289 "bdev_name": "Nvme1n1p1" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd10", 00:10:00.289 "bdev_name": "Nvme1n1p2" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd11", 00:10:00.289 "bdev_name": "Nvme2n1" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd12", 00:10:00.289 "bdev_name": "Nvme2n2" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd13", 00:10:00.289 "bdev_name": "Nvme2n3" 00:10:00.289 }, 00:10:00.289 { 00:10:00.289 "nbd_device": "/dev/nbd14", 00:10:00.289 "bdev_name": "Nvme3n1" 00:10:00.289 } 00:10:00.289 ]' 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:00.289 /dev/nbd1 00:10:00.289 /dev/nbd10 00:10:00.289 /dev/nbd11 00:10:00.289 /dev/nbd12 00:10:00.289 /dev/nbd13 00:10:00.289 /dev/nbd14' 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:00.289 /dev/nbd1 00:10:00.289 /dev/nbd10 00:10:00.289 /dev/nbd11 00:10:00.289 /dev/nbd12 00:10:00.289 /dev/nbd13 00:10:00.289 /dev/nbd14' 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:00.289 21:41:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:00.289 256+0 records in 00:10:00.289 256+0 records out 00:10:00.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111313 s, 94.2 MB/s 00:10:00.289 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:00.289 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:00.548 256+0 records in 00:10:00.548 256+0 records out 00:10:00.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.14202 s, 7.4 MB/s 00:10:00.548 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:00.548 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:00.807 256+0 records in 00:10:00.807 256+0 records out 00:10:00.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149893 s, 7.0 MB/s 00:10:00.807 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:00.807 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:00.807 256+0 records in 00:10:00.807 256+0 records out 00:10:00.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154692 s, 6.8 MB/s 00:10:00.807 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:00.807 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:01.065 256+0 records in 00:10:01.065 256+0 records out 00:10:01.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155576 s, 6.7 MB/s 00:10:01.065 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:01.065 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:01.065 256+0 records in 00:10:01.065 256+0 records out 00:10:01.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152275 s, 6.9 MB/s 00:10:01.065 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:01.065 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:01.323 256+0 records in 00:10:01.323 256+0 records out 00:10:01.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145804 s, 7.2 MB/s 00:10:01.323 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:01.323 21:41:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:01.582 256+0 records in 00:10:01.582 256+0 records out 00:10:01.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150685 s, 7.0 MB/s 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:01.582 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:01.583 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:01.583 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:01.583 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:01.583 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.583 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:01.842 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.101 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.360 21:41:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.619 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:02.878 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:02.879 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:03.137 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.395 21:41:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:03.395 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:03.395 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:03.395 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:03.653 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:03.653 malloc_lvol_verify 00:10:03.913 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:03.913 13bd9b76-31ec-495b-a81e-c1beb6ee9e6a 00:10:03.913 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:04.172 09fc5e62-f666-437e-8fae-1c6af280ea9c 00:10:04.172 21:41:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:04.431 /dev/nbd0 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:04.431 mke2fs 1.47.0 (5-Feb-2023) 00:10:04.431 Discarding device blocks: 0/4096 done 00:10:04.431 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:04.431 00:10:04.431 Allocating group tables: 0/1 done 00:10:04.431 Writing inode tables: 0/1 done 00:10:04.431 Creating journal (1024 blocks): done 00:10:04.431 Writing superblocks and filesystem accounting information: 0/1 done 00:10:04.431 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:04.431 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 64041 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 64041 ']' 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 64041 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64041 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.690 killing process with pid 64041 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64041' 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 64041 00:10:04.690 21:41:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 64041 00:10:06.066 21:41:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:06.066 00:10:06.066 real 0m12.769s 00:10:06.066 user 0m16.282s 00:10:06.066 sys 0m5.543s 00:10:06.066 21:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.066 ************************************ 00:10:06.066 21:41:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:06.066 END TEST bdev_nbd 00:10:06.066 ************************************ 00:10:06.066 21:41:13 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:06.066 21:41:13 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:10:06.066 21:41:13 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:10:06.066 skipping fio tests on NVMe due to multi-ns failures. 00:10:06.066 21:41:13 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:06.066 21:41:13 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:06.066 21:41:13 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:06.066 21:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:06.066 21:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.066 21:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:06.066 ************************************ 00:10:06.066 START TEST bdev_verify 00:10:06.066 ************************************ 00:10:06.066 21:41:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:06.066 [2024-12-10 21:41:13.747840] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:10:06.066 [2024-12-10 21:41:13.747974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64469 ] 00:10:06.324 [2024-12-10 21:41:13.930086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.324 [2024-12-10 21:41:14.045508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.324 [2024-12-10 21:41:14.045540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.260 Running I/O for 5 seconds... 
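Two asides before the verify output begins. First, the lvol-over-NBD check that closed the bdev_nbd test above condenses to a short RPC sequence; every RPC name and argument below is verbatim from the trace, with only the shell variable introduced for brevity:

    # Condensed nbd_with_lvol_verify flow (nbd_common.sh@131-142), assuming
    # the spdk-nbd app is already listening on the socket.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on the malloc bdev
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # must succeed end to end
    $rpc nbd_stop_disk /dev/nbd0

Second, the verify stage whose progress samples follow is a single bdevperf invocation; an annotated copy of the traced command (the flag glosses reflect common bdevperf usage and are not printed in the log itself):

    # Traced invocation (blockdev.sh@814). Glosses: -q queue depth per job,
    # -o I/O size in bytes, -w workload type, -t run time in seconds, -m core
    # mask (0x3 matches the two reactors above); -C is kept verbatim from the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3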
00:10:09.581 21568.00 IOPS, 84.25 MiB/s
[2024-12-10T21:41:18.244Z] 21568.00 IOPS, 84.25 MiB/s
[2024-12-10T21:41:19.184Z] 22592.00 IOPS, 88.25 MiB/s
[2024-12-10T21:41:20.121Z] 22288.00 IOPS, 87.06 MiB/s
[2024-12-10T21:41:20.121Z] 22694.40 IOPS, 88.65 MiB/s
00:10:12.390 Latency(us)
00:10:12.390 [2024-12-10T21:41:20.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:12.390 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x0 length 0xbd0bd
00:10:12.390 Nvme0n1 : 5.08 1625.13 6.35 0.00 0.00 78356.18 13475.68 88013.01
00:10:12.390 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:12.390 Nvme0n1 : 5.06 1567.76 6.12 0.00 0.00 81393.53 20108.23 91803.04
00:10:12.390 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x0 length 0x4ff80
00:10:12.390 Nvme1n1p1 : 5.10 1632.47 6.38 0.00 0.00 77910.52 12686.09 74958.44
00:10:12.390 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x4ff80 length 0x4ff80
00:10:12.390 Nvme1n1p1 : 5.06 1567.30 6.12 0.00 0.00 81319.18 22003.25 85907.43
00:10:12.390 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x0 length 0x4ff7f
00:10:12.390 Nvme1n1p2 : 5.10 1631.96 6.37 0.00 0.00 77816.75 10475.23 74116.22
00:10:12.390 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:10:12.390 Nvme1n1p2 : 5.06 1566.86 6.12 0.00 0.00 81027.61 19897.68 72852.87
00:10:12.390 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x0 length 0x80000
00:10:12.390 Nvme2n1 : 5.10 1630.64 6.37 0.00 0.00 77741.07 13791.51 77064.02
00:10:12.390 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x80000 length 0x80000
00:10:12.390 Nvme2n1 : 5.07 1566.43 6.12 0.00 0.00 80877.95 19371.28 72852.87
00:10:12.390 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x0 length 0x80000
00:10:12.390 Nvme2n2 : 5.10 1630.28 6.37 0.00 0.00 77632.23 13212.48 79169.59
00:10:12.390 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x80000 length 0x80000
00:10:12.390 Nvme2n2 : 5.08 1573.68 6.15 0.00 0.00 80388.02 5606.09 76642.90
00:10:12.390 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x0 length 0x80000
00:10:12.390 Nvme2n3 : 5.10 1629.93 6.37 0.00 0.00 77535.81 13317.76 82117.40
00:10:12.390 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x80000 length 0x80000
00:10:12.390 Nvme2n3 : 5.10 1581.74 6.18 0.00 0.00 79926.98 11791.22 80854.05
00:10:12.390 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x0 length 0x20000
00:10:12.390 Nvme3n1 : 5.11 1629.56 6.37 0.00 0.00 77440.80 13475.68 85907.43
00:10:12.390 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:12.390 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.10 1580.88 6.18 0.00 0.00 79808.68 12528.17 81275.17
[2024-12-10T21:41:20.121Z] ===================================================================================================================
[2024-12-10T21:41:20.121Z] Total : 22414.61 87.56 0.00 0.00 79195.20 5606.09 91803.04
00:10:13.767
00:10:13.767 real 0m7.811s
00:10:13.767 user 0m14.404s
00:10:13.767 sys 0m0.347s
21:41:21 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:13.767 21:41:21 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:14.027 ************************************
00:10:14.027 END TEST bdev_verify
00:10:14.027 ************************************
00:10:14.027 21:41:21 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:14.027 21:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:14.027 21:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:14.027 21:41:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:14.027 ************************************
00:10:14.027 START TEST bdev_verify_big_io
00:10:14.027 ************************************
00:10:14.027 21:41:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:14.027 [2024-12-10 21:41:21.624019] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:10:14.027 [2024-12-10 21:41:21.624156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64572 ]
00:10:14.286 [2024-12-10 21:41:21.804813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:14.286 [2024-12-10 21:41:21.930206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.286 [2024-12-10 21:41:21.930237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:10:15.226 Running I/O for 5 seconds...
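One quick consistency check on the bdev_verify results above: the two reported columns are tied by MiB/s = IOPS x I/O size / 2^20. For the final progress sample, 22694.40 x 4096 / 1048576 is about 88.65 MiB/s, and for the Total row, 22414.61 x 4096 / 1048576 is about 87.56 MiB/s, both matching the log. The same relation holds for the 64 KiB big-I/O run starting here, with 65536 in place of 4096.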
00:10:20.294 855.00 IOPS, 53.44 MiB/s
[2024-12-10T21:41:28.962Z] 2834.00 IOPS, 177.12 MiB/s
[2024-12-10T21:41:28.962Z] 3580.33 IOPS, 223.77 MiB/s
00:10:21.231 Latency(us)
00:10:21.231 [2024-12-10T21:41:28.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:21.231 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:21.231 Verification LBA range: start 0x0 length 0xbd0b
00:10:21.231 Nvme0n1 : 5.80 126.94 7.93 0.00 0.00 962154.19 14317.91 1044364.85
00:10:21.231 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:21.231 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:21.232 Nvme0n1 : 5.58 129.08 8.07 0.00 0.00 944763.00 27372.47 1030889.18
00:10:21.232 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x0 length 0x4ff8
00:10:21.232 Nvme1n1p1 : 5.80 118.61 7.41 0.00 0.00 995357.00 63167.23 1489062.14
00:10:21.232 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x4ff8 length 0x4ff8
00:10:21.232 Nvme1n1p1 : 5.80 88.32 5.52 0.00 0.00 1352109.34 77064.02 1617081.06
00:10:21.232 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x0 length 0x4ff7
00:10:21.232 Nvme1n1p2 : 5.80 122.86 7.68 0.00 0.00 949780.56 81275.17 1509275.66
00:10:21.232 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x4ff7 length 0x4ff7
00:10:21.232 Nvme1n1p2 : 5.80 112.58 7.04 0.00 0.00 1050172.11 62325.00 1630556.74
00:10:21.232 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x0 length 0x8000
00:10:21.232 Nvme2n1 : 5.85 127.46 7.97 0.00 0.00 897379.40 45901.52 1529489.17
00:10:21.232 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x8000 length 0x8000
00:10:21.232 Nvme2n1 : 5.86 147.55 9.22 0.00 0.00 783403.14 28004.14 875918.91
00:10:21.232 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x0 length 0x8000
00:10:21.232 Nvme2n2 : 5.91 134.44 8.40 0.00 0.00 831715.57 13686.23 1542964.84
00:10:21.232 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x8000 length 0x8000
00:10:21.232 Nvme2n2 : 5.86 148.89 9.31 0.00 0.00 756515.12 28214.70 896132.42
00:10:21.232 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x0 length 0x8000
00:10:21.232 Nvme2n3 : 5.93 138.14 8.63 0.00 0.00 786858.90 37479.22 1569916.20
00:10:21.232 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x8000 length 0x8000
00:10:21.232 Nvme2n3 : 5.90 152.03 9.50 0.00 0.00 719794.92 30951.94 916345.93
00:10:21.232 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x0 length 0x2000
00:10:21.232 Nvme3n1 : 5.96 159.52 9.97 0.00 0.00 667429.94 7053.67 1590129.71
00:10:21.232 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:21.232 Verification LBA range: start 0x2000 length 0x2000
00:10:21.232 Nvme3n1 : 5.93 172.78 10.80 0.00 0.00 621657.68 4448.03 936559.45
[2024-12-10T21:41:28.963Z] ===================================================================================================================
[2024-12-10T21:41:28.963Z] Total : 1879.20 117.45 0.00 0.00 851398.58 4448.03 1630556.74
00:10:23.137
00:10:23.137 real 0m9.249s
00:10:23.137 user 0m17.249s
00:10:23.137 sys 0m0.378s
21:41:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:23.137 21:41:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:23.137 ************************************
00:10:23.137 END TEST bdev_verify_big_io
00:10:23.137 ************************************
00:10:23.137 21:41:30 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:23.137 21:41:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:23.137 21:41:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:23.137 21:41:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:23.137 ************************************
00:10:23.137 START TEST bdev_write_zeroes
00:10:23.137 ************************************
00:10:23.137 21:41:30 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:23.394 [2024-12-10 21:41:30.942610] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:10:23.395 [2024-12-10 21:41:30.942746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64688 ]
00:10:23.652 [2024-12-10 21:41:31.125591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:23.652 [2024-12-10 21:41:31.233558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:10:24.218 Running I/O for 1 seconds...
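A write_zeroes workload only makes sense on bdevs that advertise the capability, and the bdev_get_bdevs output near the end of this log does show "write_zeroes": true under supported_io_types. A one-liner in the suite's own rpc.py-plus-jq style can list which bdevs support it (the jq filter itself is illustrative, not taken from the trace):

    # Illustrative: name every bdev that advertises write_zeroes support.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'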
00:10:25.590 77952.00 IOPS, 304.50 MiB/s
00:10:25.590 Latency(us)
[2024-12-10T21:41:33.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:25.590 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:25.590 Nvme0n1 : 1.02 11097.94 43.35 0.00 0.00 11509.60 6237.76 21687.42
00:10:25.590 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:25.590 Nvme1n1p1 : 1.02 11087.10 43.31 0.00 0.00 11506.81 10369.95 21582.14
00:10:25.590 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:25.590 Nvme1n1p2 : 1.02 11076.37 43.27 0.00 0.00 11489.49 10317.31 20845.19
00:10:25.590 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:25.590 Nvme2n1 : 1.02 11066.42 43.23 0.00 0.00 11456.80 10264.67 20318.79
00:10:25.590 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:25.590 Nvme2n2 : 1.02 11056.56 43.19 0.00 0.00 11443.15 9475.08 19792.40
00:10:25.590 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:25.590 Nvme2n3 : 1.03 11046.63 43.15 0.00 0.00 11417.64 7737.99 20213.51
00:10:25.590 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:25.590 Nvme3n1 : 1.03 11036.79 43.11 0.00 0.00 11411.62 7474.79 21897.97
00:10:25.590 [2024-12-10T21:41:33.321Z] ===================================================================================================================
[2024-12-10T21:41:33.321Z] Total : 77467.82 302.61 0.00 0.00 11462.16 6237.76 21897.97
00:10:26.544
00:10:26.544 real 0m3.295s
00:10:26.544 user 0m2.895s
00:10:26.544 sys 0m0.287s
00:10:26.544 ************************************
00:10:26.544 END TEST bdev_write_zeroes
00:10:26.544 ************************************
00:10:26.544 21:41:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:26.544 21:41:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:26.544 21:41:34 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:26.544 21:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:26.544 21:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:26.544 21:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:26.544 ************************************
00:10:26.544 START TEST bdev_json_nonenclosed
00:10:26.544 ************************************
00:10:26.544 21:41:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:26.824 [2024-12-10 21:41:34.318208] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:10:26.824 [2024-12-10 21:41:34.318332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64747 ] 00:10:26.824 [2024-12-10 21:41:34.505464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.083 [2024-12-10 21:41:34.626895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.083 [2024-12-10 21:41:34.626994] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:27.083 [2024-12-10 21:41:34.627017] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:27.083 [2024-12-10 21:41:34.627030] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:27.342 00:10:27.342 real 0m0.657s 00:10:27.342 user 0m0.420s 00:10:27.342 sys 0m0.132s 00:10:27.342 21:41:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.342 21:41:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:27.342 ************************************ 00:10:27.342 END TEST bdev_json_nonenclosed 00:10:27.342 ************************************ 00:10:27.342 21:41:34 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:27.342 21:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:27.342 21:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.342 21:41:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:27.342 ************************************ 00:10:27.342 START TEST bdev_json_nonarray 00:10:27.342 ************************************ 00:10:27.342 21:41:34 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:27.342 [2024-12-10 21:41:35.047059] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:10:27.343 [2024-12-10 21:41:35.047187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64772 ] 00:10:27.600 [2024-12-10 21:41:35.226311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.859 [2024-12-10 21:41:35.336931] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.859 [2024-12-10 21:41:35.337043] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
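Neither fixture's contents are printed in the log, but the two error strings pin down the shapes being rejected. Hypothetical minimal reproductions: nonenclosed.json would contain something like "subsystems": [] at the top level, a value not enclosed in a {...} object, while nonarray.json would contain { "subsystems": {} }, where "subsystems" is an object instead of the required array. In both cases json_config_prepare_ctx refuses the config before any bdev is created, which is exactly the failure path these two negative tests exercise.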
00:10:27.859 [2024-12-10 21:41:35.337078] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:27.859 [2024-12-10 21:41:35.337091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.118 00:10:28.118 real 0m0.651s 00:10:28.118 user 0m0.386s 00:10:28.118 sys 0m0.160s 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:28.118 ************************************ 00:10:28.118 END TEST bdev_json_nonarray 00:10:28.118 ************************************ 00:10:28.118 21:41:35 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:10:28.118 21:41:35 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:10:28.118 21:41:35 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:28.118 21:41:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.118 21:41:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.118 21:41:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.118 ************************************ 00:10:28.118 START TEST bdev_gpt_uuid 00:10:28.118 ************************************ 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64803 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64803 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64803 ']' 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.118 21:41:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:28.118 [2024-12-10 21:41:35.793262] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
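bdev_json_nonarray, which finishes above, is the companion negative test: the config parses as JSON, but "subsystems" is not an array, so json_config_prepare_ctx fails with "'subsystems' should be an array" and the app again stops non-zero, which the test harness treats as a pass. As with the previous test, the real test/bdev/nonarray.json is not reproduced in this log; a plausible minimal sketch (illustrative only):

    cat > nonarray.json <<'EOF'
    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }
    EOF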
00:10:28.118 [2024-12-10 21:41:35.793396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64803 ] 00:10:28.376 [2024-12-10 21:41:35.969887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.376 [2024-12-10 21:41:36.089810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.313 21:41:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.313 21:41:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:10:29.313 21:41:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:29.313 21:41:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 21:41:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:29.571 Some configs were skipped because the RPC state that can call them passed over. 00:10:29.571 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.572 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:10:29.572 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.572 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:10:29.830 { 00:10:29.830 "name": "Nvme1n1p1", 00:10:29.830 "aliases": [ 00:10:29.830 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:29.830 ], 00:10:29.830 "product_name": "GPT Disk", 00:10:29.830 "block_size": 4096, 00:10:29.830 "num_blocks": 655104, 00:10:29.830 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:29.830 "assigned_rate_limits": { 00:10:29.830 "rw_ios_per_sec": 0, 00:10:29.830 "rw_mbytes_per_sec": 0, 00:10:29.830 "r_mbytes_per_sec": 0, 00:10:29.830 "w_mbytes_per_sec": 0 00:10:29.830 }, 00:10:29.830 "claimed": false, 00:10:29.830 "zoned": false, 00:10:29.830 "supported_io_types": { 00:10:29.830 "read": true, 00:10:29.830 "write": true, 00:10:29.830 "unmap": true, 00:10:29.830 "flush": true, 00:10:29.830 "reset": true, 00:10:29.830 "nvme_admin": false, 00:10:29.830 "nvme_io": false, 00:10:29.830 "nvme_io_md": false, 00:10:29.830 "write_zeroes": true, 00:10:29.830 "zcopy": false, 00:10:29.830 "get_zone_info": false, 00:10:29.830 "zone_management": false, 00:10:29.830 "zone_append": false, 00:10:29.830 "compare": true, 00:10:29.830 "compare_and_write": false, 00:10:29.830 "abort": true, 00:10:29.830 "seek_hole": false, 00:10:29.830 "seek_data": false, 00:10:29.830 "copy": true, 00:10:29.830 "nvme_iov_md": false 00:10:29.830 }, 00:10:29.830 "driver_specific": { 
00:10:29.830 "gpt": { 00:10:29.830 "base_bdev": "Nvme1n1", 00:10:29.830 "offset_blocks": 256, 00:10:29.830 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:29.830 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:29.830 "partition_name": "SPDK_TEST_first" 00:10:29.830 } 00:10:29.830 } 00:10:29.830 } 00:10:29.830 ]' 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.830 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:29.831 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.831 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:10:29.831 { 00:10:29.831 "name": "Nvme1n1p2", 00:10:29.831 "aliases": [ 00:10:29.831 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:29.831 ], 00:10:29.831 "product_name": "GPT Disk", 00:10:29.831 "block_size": 4096, 00:10:29.831 "num_blocks": 655103, 00:10:29.831 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:29.831 "assigned_rate_limits": { 00:10:29.831 "rw_ios_per_sec": 0, 00:10:29.831 "rw_mbytes_per_sec": 0, 00:10:29.831 "r_mbytes_per_sec": 0, 00:10:29.831 "w_mbytes_per_sec": 0 00:10:29.831 }, 00:10:29.831 "claimed": false, 00:10:29.831 "zoned": false, 00:10:29.831 "supported_io_types": { 00:10:29.831 "read": true, 00:10:29.831 "write": true, 00:10:29.831 "unmap": true, 00:10:29.831 "flush": true, 00:10:29.831 "reset": true, 00:10:29.831 "nvme_admin": false, 00:10:29.831 "nvme_io": false, 00:10:29.831 "nvme_io_md": false, 00:10:29.831 "write_zeroes": true, 00:10:29.831 "zcopy": false, 00:10:29.831 "get_zone_info": false, 00:10:29.831 "zone_management": false, 00:10:29.831 "zone_append": false, 00:10:29.831 "compare": true, 00:10:29.831 "compare_and_write": false, 00:10:29.831 "abort": true, 00:10:29.831 "seek_hole": false, 00:10:29.831 "seek_data": false, 00:10:29.831 "copy": true, 00:10:29.831 "nvme_iov_md": false 00:10:29.831 }, 00:10:29.831 "driver_specific": { 00:10:29.831 "gpt": { 00:10:29.831 "base_bdev": "Nvme1n1", 00:10:29.831 "offset_blocks": 655360, 00:10:29.831 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:29.831 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:29.831 "partition_name": "SPDK_TEST_second" 00:10:29.831 } 00:10:29.831 } 00:10:29.831 } 00:10:29.831 ]' 00:10:29.831 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:10:29.831 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:10:29.831 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64803 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64803 ']' 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64803 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64803 00:10:30.090 killing process with pid 64803 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64803' 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64803 00:10:30.090 21:41:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64803 00:10:32.632 ************************************ 00:10:32.632 END TEST bdev_gpt_uuid 00:10:32.632 ************************************ 00:10:32.632 00:10:32.632 real 0m4.408s 00:10:32.632 user 0m4.509s 00:10:32.632 sys 0m0.575s 00:10:32.632 21:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.632 21:41:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:32.632 21:41:40 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:33.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:33.460 Waiting for block devices as requested 00:10:33.460 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:33.460 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:10:33.718 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:33.718 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:38.989 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:38.989 21:41:46 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:38.989 21:41:46 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:39.247 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:39.247 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:39.247 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:39.247 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:39.247 21:41:46 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:39.247 00:10:39.247 real 1m6.125s 00:10:39.247 user 1m21.914s 00:10:39.247 sys 0m12.592s 00:10:39.247 21:41:46 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.247 ************************************ 00:10:39.247 21:41:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:39.247 END TEST blockdev_nvme_gpt 00:10:39.247 ************************************ 00:10:39.247 21:41:46 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:39.247 21:41:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.247 21:41:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.247 21:41:46 -- common/autotest_common.sh@10 -- # set +x 00:10:39.247 ************************************ 00:10:39.247 START TEST nvme 00:10:39.247 ************************************ 00:10:39.247 21:41:46 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:39.247 * Looking for test storage... 00:10:39.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:39.247 21:41:46 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:39.247 21:41:46 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:39.247 21:41:46 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:10:39.506 21:41:47 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:39.506 21:41:47 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.506 21:41:47 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.506 21:41:47 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.506 21:41:47 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.506 21:41:47 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.506 21:41:47 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.506 21:41:47 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.506 21:41:47 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.506 21:41:47 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.506 21:41:47 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.506 21:41:47 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.506 21:41:47 nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:39.506 21:41:47 nvme -- scripts/common.sh@345 -- # : 1 00:10:39.506 21:41:47 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.506 21:41:47 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.506 21:41:47 nvme -- scripts/common.sh@365 -- # decimal 1 00:10:39.506 21:41:47 nvme -- scripts/common.sh@353 -- # local d=1 00:10:39.506 21:41:47 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.506 21:41:47 nvme -- scripts/common.sh@355 -- # echo 1 00:10:39.506 21:41:47 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.506 21:41:47 nvme -- scripts/common.sh@366 -- # decimal 2 00:10:39.506 21:41:47 nvme -- scripts/common.sh@353 -- # local d=2 00:10:39.506 21:41:47 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.506 21:41:47 nvme -- scripts/common.sh@355 -- # echo 2 00:10:39.506 21:41:47 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.506 21:41:47 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.506 21:41:47 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.506 21:41:47 nvme -- scripts/common.sh@368 -- # return 0 00:10:39.506 21:41:47 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.506 21:41:47 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:39.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.506 --rc genhtml_branch_coverage=1 00:10:39.506 --rc genhtml_function_coverage=1 00:10:39.506 --rc genhtml_legend=1 00:10:39.506 --rc geninfo_all_blocks=1 00:10:39.506 --rc geninfo_unexecuted_blocks=1 00:10:39.506 00:10:39.506 ' 00:10:39.506 21:41:47 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:39.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.506 --rc genhtml_branch_coverage=1 00:10:39.506 --rc genhtml_function_coverage=1 00:10:39.506 --rc genhtml_legend=1 00:10:39.506 --rc geninfo_all_blocks=1 00:10:39.506 --rc geninfo_unexecuted_blocks=1 00:10:39.506 00:10:39.506 ' 00:10:39.506 21:41:47 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:39.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.506 --rc genhtml_branch_coverage=1 00:10:39.506 --rc genhtml_function_coverage=1 00:10:39.506 --rc genhtml_legend=1 00:10:39.506 --rc geninfo_all_blocks=1 00:10:39.506 --rc geninfo_unexecuted_blocks=1 00:10:39.506 00:10:39.506 ' 00:10:39.506 21:41:47 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:39.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.506 --rc genhtml_branch_coverage=1 00:10:39.506 --rc genhtml_function_coverage=1 00:10:39.506 --rc genhtml_legend=1 00:10:39.506 --rc geninfo_all_blocks=1 00:10:39.506 --rc geninfo_unexecuted_blocks=1 00:10:39.506 00:10:39.506 ' 00:10:39.506 21:41:47 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:40.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:41.010 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.010 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.010 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.010 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.010 21:41:48 nvme -- nvme/nvme.sh@79 -- # uname 00:10:41.010 21:41:48 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:41.010 21:41:48 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:41.010 21:41:48 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:41.010 21:41:48 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:41.010 21:41:48 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:10:41.010 21:41:48 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:10:41.010 21:41:48 nvme -- common/autotest_common.sh@1075 -- # stubpid=65469 00:10:41.010 21:41:48 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:10:41.010 Waiting for stub to ready for secondary processes... 00:10:41.010 21:41:48 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:41.010 21:41:48 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65469 ]] 00:10:41.010 21:41:48 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:10:41.010 21:41:48 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:41.010 [2024-12-10 21:41:48.734185] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:10:41.010 [2024-12-10 21:41:48.734316] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:42.389 21:41:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:42.389 21:41:49 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/65469 ]] 00:10:42.389 21:41:49 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:10:42.389 [2024-12-10 21:41:49.921308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.389 [2024-12-10 21:41:50.028345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.389 [2024-12-10 21:41:50.028491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.389 [2024-12-10 21:41:50.028537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.389 [2024-12-10 21:41:50.046709] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:42.389 [2024-12-10 21:41:50.046902] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:42.389 [2024-12-10 21:41:50.063581] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:42.389 [2024-12-10 21:41:50.063982] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:42.389 [2024-12-10 21:41:50.067677] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:42.389 [2024-12-10 21:41:50.068101] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:42.389 [2024-12-10 21:41:50.068365] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:42.389 [2024-12-10 21:41:50.072222] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:42.389 [2024-12-10 21:41:50.072604] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:42.389 [2024-12-10 21:41:50.072864] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:42.389 [2024-12-10 21:41:50.077171] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:42.389 [2024-12-10 21:41:50.077447] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:42.389 [2024-12-10 21:41:50.077621] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:42.389 [2024-12-10 21:41:50.077718] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:42.389 [2024-12-10 21:41:50.077826] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:42.957 21:41:50 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:42.957 21:41:50 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:10:43.215 done. 00:10:43.215 21:41:50 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:43.215 21:41:50 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:10:43.216 21:41:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.216 21:41:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.216 ************************************ 00:10:43.216 START TEST nvme_reset 00:10:43.216 ************************************ 00:10:43.216 21:41:50 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:43.474 Initializing NVMe Controllers 00:10:43.474 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:43.474 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:43.474 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:43.474 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:43.474 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:43.474 ************************************ 00:10:43.474 END TEST nvme_reset 00:10:43.474 ************************************ 00:10:43.474 00:10:43.474 real 0m0.289s 00:10:43.474 user 0m0.107s 00:10:43.474 sys 0m0.142s 00:10:43.474 21:41:50 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.474 21:41:50 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:43.474 21:41:51 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:43.474 21:41:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.474 21:41:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.474 21:41:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.474 ************************************ 00:10:43.474 START TEST nvme_identify 00:10:43.474 ************************************ 00:10:43.474 21:41:51 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:10:43.474 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:43.474 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:43.474 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:43.474 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:43.474 21:41:51 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:43.474 21:41:51 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:10:43.474 21:41:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:43.474 21:41:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:43.474 21:41:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:43.474 21:41:51 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:43.474 21:41:51 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:43.474 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:43.736 [2024-12-10 21:41:51.413550] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 65502 terminated unexpected 00:10:43.736 ===================================================== 00:10:43.736 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:43.736 ===================================================== 00:10:43.736 Controller Capabilities/Features 00:10:43.736 ================================ 00:10:43.736 Vendor ID: 1b36 00:10:43.736 Subsystem Vendor ID: 1af4 00:10:43.736 Serial Number: 12340 00:10:43.736 Model Number: QEMU NVMe Ctrl 00:10:43.736 Firmware Version: 8.0.0 00:10:43.736 Recommended Arb Burst: 6 00:10:43.736 IEEE OUI Identifier: 00 54 52 00:10:43.736 Multi-path I/O 00:10:43.736 May have multiple subsystem ports: No 00:10:43.736 May have multiple controllers: No 00:10:43.736 Associated with SR-IOV VF: No 00:10:43.736 Max Data Transfer Size: 524288 00:10:43.736 Max Number of Namespaces: 256 00:10:43.736 Max Number of I/O Queues: 64 00:10:43.736 NVMe Specification Version (VS): 1.4 00:10:43.736 NVMe Specification Version (Identify): 1.4 00:10:43.736 Maximum Queue Entries: 2048 00:10:43.736 Contiguous Queues Required: Yes 00:10:43.736 Arbitration Mechanisms Supported 00:10:43.736 Weighted Round Robin: Not Supported 00:10:43.736 Vendor Specific: Not Supported 00:10:43.736 Reset Timeout: 7500 ms 00:10:43.736 Doorbell Stride: 4 bytes 00:10:43.736 NVM Subsystem Reset: Not Supported 00:10:43.736 Command Sets Supported 00:10:43.736 NVM Command Set: Supported 00:10:43.736 Boot Partition: Not Supported 00:10:43.736 Memory Page Size Minimum: 4096 bytes 00:10:43.736 Memory Page Size Maximum: 65536 bytes 00:10:43.736 Persistent Memory Region: Not Supported 00:10:43.736 Optional Asynchronous Events Supported 00:10:43.736 Namespace Attribute Notices: Supported 00:10:43.736 Firmware Activation Notices: Not Supported 00:10:43.736 ANA Change Notices: Not Supported 00:10:43.736 PLE Aggregate Log Change Notices: Not Supported 00:10:43.736 LBA Status Info Alert Notices: Not Supported 00:10:43.736 EGE Aggregate Log Change Notices: Not Supported 00:10:43.736 Normal NVM Subsystem Shutdown event: Not Supported 00:10:43.736 Zone Descriptor Change Notices: Not Supported 00:10:43.736 Discovery Log Change Notices: Not Supported 00:10:43.736 Controller Attributes 00:10:43.736 128-bit Host Identifier: Not Supported 00:10:43.736 Non-Operational Permissive Mode: Not Supported 00:10:43.736 NVM Sets: Not Supported 00:10:43.736 Read Recovery Levels: Not Supported 00:10:43.736 Endurance Groups: Not Supported 00:10:43.736 Predictable Latency Mode: Not Supported 00:10:43.736 Traffic Based Keep ALive: Not Supported 00:10:43.736 Namespace Granularity: Not Supported 00:10:43.736 SQ Associations: Not Supported 00:10:43.736 UUID List: Not Supported 00:10:43.736 Multi-Domain Subsystem: Not Supported 00:10:43.736 Fixed Capacity Management: Not Supported 00:10:43.736 Variable Capacity Management: Not Supported 00:10:43.736 Delete Endurance Group: Not Supported 00:10:43.736 Delete NVM Set: Not Supported 00:10:43.736 Extended LBA Formats Supported: Supported 00:10:43.736 Flexible Data Placement Supported: Not Supported 00:10:43.736 00:10:43.736 Controller Memory Buffer Support 00:10:43.737 ================================ 00:10:43.737 Supported: No 
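On the identify output above: the reported Max Data Transfer Size of 524288 bytes is consistent with MDTS being expressed, per the NVMe spec, as a power-of-two multiple of the controller's minimum memory page size. Assuming CAP.MPSMIN corresponds to the 4096-byte Memory Page Size Minimum shown above and an MDTS exponent of 7 (an inference; the raw register values do not appear in this log):

    awk 'BEGIN { print 4096 * 2^7 }'   # 524288, matching "Max Data Transfer Size: 524288"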
00:10:43.737 00:10:43.737 Persistent Memory Region Support 00:10:43.737 ================================ 00:10:43.737 Supported: No 00:10:43.737 00:10:43.737 Admin Command Set Attributes 00:10:43.737 ============================ 00:10:43.737 Security Send/Receive: Not Supported 00:10:43.737 Format NVM: Supported 00:10:43.737 Firmware Activate/Download: Not Supported 00:10:43.737 Namespace Management: Supported 00:10:43.737 Device Self-Test: Not Supported 00:10:43.737 Directives: Supported 00:10:43.737 NVMe-MI: Not Supported 00:10:43.737 Virtualization Management: Not Supported 00:10:43.737 Doorbell Buffer Config: Supported 00:10:43.737 Get LBA Status Capability: Not Supported 00:10:43.737 Command & Feature Lockdown Capability: Not Supported 00:10:43.737 Abort Command Limit: 4 00:10:43.737 Async Event Request Limit: 4 00:10:43.737 Number of Firmware Slots: N/A 00:10:43.737 Firmware Slot 1 Read-Only: N/A 00:10:43.737 Firmware Activation Without Reset: N/A 00:10:43.737 Multiple Update Detection Support: N/A 00:10:43.737 Firmware Update Granularity: No Information Provided 00:10:43.737 Per-Namespace SMART Log: Yes 00:10:43.737 Asymmetric Namespace Access Log Page: Not Supported 00:10:43.737 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:43.737 Command Effects Log Page: Supported 00:10:43.737 Get Log Page Extended Data: Supported 00:10:43.737 Telemetry Log Pages: Not Supported 00:10:43.737 Persistent Event Log Pages: Not Supported 00:10:43.737 Supported Log Pages Log Page: May Support 00:10:43.737 Commands Supported & Effects Log Page: Not Supported 00:10:43.737 Feature Identifiers & Effects Log Page:May Support 00:10:43.737 NVMe-MI Commands & Effects Log Page: May Support 00:10:43.737 Data Area 4 for Telemetry Log: Not Supported 00:10:43.737 Error Log Page Entries Supported: 1 00:10:43.737 Keep Alive: Not Supported 00:10:43.737 00:10:43.737 NVM Command Set Attributes 00:10:43.737 ========================== 00:10:43.737 Submission Queue Entry Size 00:10:43.737 Max: 64 00:10:43.737 Min: 64 00:10:43.737 Completion Queue Entry Size 00:10:43.737 Max: 16 00:10:43.737 Min: 16 00:10:43.737 Number of Namespaces: 256 00:10:43.737 Compare Command: Supported 00:10:43.737 Write Uncorrectable Command: Not Supported 00:10:43.737 Dataset Management Command: Supported 00:10:43.737 Write Zeroes Command: Supported 00:10:43.737 Set Features Save Field: Supported 00:10:43.737 Reservations: Not Supported 00:10:43.737 Timestamp: Supported 00:10:43.737 Copy: Supported 00:10:43.737 Volatile Write Cache: Present 00:10:43.737 Atomic Write Unit (Normal): 1 00:10:43.737 Atomic Write Unit (PFail): 1 00:10:43.737 Atomic Compare & Write Unit: 1 00:10:43.737 Fused Compare & Write: Not Supported 00:10:43.737 Scatter-Gather List 00:10:43.737 SGL Command Set: Supported 00:10:43.737 SGL Keyed: Not Supported 00:10:43.737 SGL Bit Bucket Descriptor: Not Supported 00:10:43.737 SGL Metadata Pointer: Not Supported 00:10:43.737 Oversized SGL: Not Supported 00:10:43.737 SGL Metadata Address: Not Supported 00:10:43.737 SGL Offset: Not Supported 00:10:43.737 Transport SGL Data Block: Not Supported 00:10:43.737 Replay Protected Memory Block: Not Supported 00:10:43.737 00:10:43.737 Firmware Slot Information 00:10:43.737 ========================= 00:10:43.737 Active slot: 1 00:10:43.737 Slot 1 Firmware Revision: 1.0 00:10:43.737 00:10:43.737 00:10:43.737 Commands Supported and Effects 00:10:43.737 ============================== 00:10:43.737 Admin Commands 00:10:43.737 -------------- 00:10:43.737 Delete I/O Submission Queue (00h): Supported 
00:10:43.737 Create I/O Submission Queue (01h): Supported 00:10:43.737 Get Log Page (02h): Supported 00:10:43.737 Delete I/O Completion Queue (04h): Supported 00:10:43.737 Create I/O Completion Queue (05h): Supported 00:10:43.737 Identify (06h): Supported 00:10:43.737 Abort (08h): Supported 00:10:43.737 Set Features (09h): Supported 00:10:43.737 Get Features (0Ah): Supported 00:10:43.737 Asynchronous Event Request (0Ch): Supported 00:10:43.737 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:43.737 Directive Send (19h): Supported 00:10:43.737 Directive Receive (1Ah): Supported 00:10:43.737 Virtualization Management (1Ch): Supported 00:10:43.737 Doorbell Buffer Config (7Ch): Supported 00:10:43.737 Format NVM (80h): Supported LBA-Change 00:10:43.737 I/O Commands 00:10:43.737 ------------ 00:10:43.737 Flush (00h): Supported LBA-Change 00:10:43.737 Write (01h): Supported LBA-Change 00:10:43.737 Read (02h): Supported 00:10:43.737 Compare (05h): Supported 00:10:43.737 Write Zeroes (08h): Supported LBA-Change 00:10:43.737 Dataset Management (09h): Supported LBA-Change 00:10:43.737 Unknown (0Ch): Supported 00:10:43.737 Unknown (12h): Supported 00:10:43.737 Copy (19h): Supported LBA-Change 00:10:43.737 Unknown (1Dh): Supported LBA-Change 00:10:43.737 00:10:43.737 Error Log 00:10:43.737 ========= 00:10:43.737 00:10:43.737 Arbitration 00:10:43.737 =========== 00:10:43.737 Arbitration Burst: no limit 00:10:43.737 00:10:43.737 Power Management 00:10:43.737 ================ 00:10:43.737 Number of Power States: 1 00:10:43.737 Current Power State: Power State #0 00:10:43.737 Power State #0: 00:10:43.737 Max Power: 25.00 W 00:10:43.737 Non-Operational State: Operational 00:10:43.737 Entry Latency: 16 microseconds 00:10:43.737 Exit Latency: 4 microseconds 00:10:43.737 Relative Read Throughput: 0 00:10:43.737 Relative Read Latency: 0 00:10:43.737 Relative Write Throughput: 0 00:10:43.737 Relative Write Latency: 0 00:10:43.737 Idle Power[2024-12-10 21:41:51.414818] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 65502 terminated unexpected 00:10:43.737 : Not Reported 00:10:43.737 Active Power: Not Reported 00:10:43.737 Non-Operational Permissive Mode: Not Supported 00:10:43.737 00:10:43.737 Health Information 00:10:43.737 ================== 00:10:43.737 Critical Warnings: 00:10:43.737 Available Spare Space: OK 00:10:43.737 Temperature: OK 00:10:43.737 Device Reliability: OK 00:10:43.737 Read Only: No 00:10:43.737 Volatile Memory Backup: OK 00:10:43.737 Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.737 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:43.737 Available Spare: 0% 00:10:43.737 Available Spare Threshold: 0% 00:10:43.737 Life Percentage Used: 0% 00:10:43.737 Data Units Read: 801 00:10:43.737 Data Units Written: 729 00:10:43.737 Host Read Commands: 39635 00:10:43.737 Host Write Commands: 39421 00:10:43.737 Controller Busy Time: 0 minutes 00:10:43.737 Power Cycles: 0 00:10:43.737 Power On Hours: 0 hours 00:10:43.737 Unsafe Shutdowns: 0 00:10:43.737 Unrecoverable Media Errors: 0 00:10:43.737 Lifetime Error Log Entries: 0 00:10:43.737 Warning Temperature Time: 0 minutes 00:10:43.737 Critical Temperature Time: 0 minutes 00:10:43.737 00:10:43.737 Number of Queues 00:10:43.737 ================ 00:10:43.737 Number of I/O Submission Queues: 64 00:10:43.737 Number of I/O Completion Queues: 64 00:10:43.737 00:10:43.737 ZNS Specific Controller Data 00:10:43.737 ============================ 00:10:43.737 Zone Append Size Limit: 0 00:10:43.737 
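For the Health Information section above, note that the NVMe SMART log reports Data Units in thousands of 512-byte units (this conversion comes from the NVMe base specification, not from anything SPDK-specific), so the 801 data units read correspond to roughly 391 MiB:

    awk 'BEGIN { printf "%.1f MiB\n", 801 * 1000 * 512 / (1024 * 1024) }'   # ~391.1 MiB read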
00:10:43.737 00:10:43.737 Active Namespaces 00:10:43.737 ================= 00:10:43.737 Namespace ID:1 00:10:43.737 Error Recovery Timeout: Unlimited 00:10:43.737 Command Set Identifier: NVM (00h) 00:10:43.737 Deallocate: Supported 00:10:43.737 Deallocated/Unwritten Error: Supported 00:10:43.737 Deallocated Read Value: All 0x00 00:10:43.737 Deallocate in Write Zeroes: Not Supported 00:10:43.737 Deallocated Guard Field: 0xFFFF 00:10:43.737 Flush: Supported 00:10:43.737 Reservation: Not Supported 00:10:43.737 Metadata Transferred as: Separate Metadata Buffer 00:10:43.737 Namespace Sharing Capabilities: Private 00:10:43.737 Size (in LBAs): 1548666 (5GiB) 00:10:43.737 Capacity (in LBAs): 1548666 (5GiB) 00:10:43.737 Utilization (in LBAs): 1548666 (5GiB) 00:10:43.737 Thin Provisioning: Not Supported 00:10:43.737 Per-NS Atomic Units: No 00:10:43.737 Maximum Single Source Range Length: 128 00:10:43.737 Maximum Copy Length: 128 00:10:43.737 Maximum Source Range Count: 128 00:10:43.737 NGUID/EUI64 Never Reused: No 00:10:43.737 Namespace Write Protected: No 00:10:43.737 Number of LBA Formats: 8 00:10:43.737 Current LBA Format: LBA Format #07 00:10:43.737 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:43.737 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:43.737 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:43.737 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:43.737 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:43.737 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:43.737 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:43.737 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:43.737 00:10:43.738 NVM Specific Namespace Data 00:10:43.738 =========================== 00:10:43.738 Logical Block Storage Tag Mask: 0 00:10:43.738 Protection Information Capabilities: 00:10:43.738 16b Guard Protection Information Storage Tag Support: No 00:10:43.738 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:43.738 Storage Tag Check Read Support: No 00:10:43.738 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.738 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.738 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.738 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.738 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.738 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.738 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.738 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.738 ===================================================== 00:10:43.738 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:43.738 ===================================================== 00:10:43.738 Controller Capabilities/Features 00:10:43.738 ================================ 00:10:43.738 Vendor ID: 1b36 00:10:43.738 Subsystem Vendor ID: 1af4 00:10:43.738 Serial Number: 12341 00:10:43.738 Model Number: QEMU NVMe Ctrl 00:10:43.738 Firmware Version: 8.0.0 00:10:43.738 Recommended Arb Burst: 6 00:10:43.738 IEEE OUI Identifier: 00 54 52 00:10:43.738 Multi-path I/O 00:10:43.738 May have multiple subsystem ports: No 00:10:43.738 May have multiple controllers: No 
00:10:43.738 Associated with SR-IOV VF: No 00:10:43.738 Max Data Transfer Size: 524288 00:10:43.738 Max Number of Namespaces: 256 00:10:43.738 Max Number of I/O Queues: 64 00:10:43.738 NVMe Specification Version (VS): 1.4 00:10:43.738 NVMe Specification Version (Identify): 1.4 00:10:43.738 Maximum Queue Entries: 2048 00:10:43.738 Contiguous Queues Required: Yes 00:10:43.738 Arbitration Mechanisms Supported 00:10:43.738 Weighted Round Robin: Not Supported 00:10:43.738 Vendor Specific: Not Supported 00:10:43.738 Reset Timeout: 7500 ms 00:10:43.738 Doorbell Stride: 4 bytes 00:10:43.738 NVM Subsystem Reset: Not Supported 00:10:43.738 Command Sets Supported 00:10:43.738 NVM Command Set: Supported 00:10:43.738 Boot Partition: Not Supported 00:10:43.738 Memory Page Size Minimum: 4096 bytes 00:10:43.738 Memory Page Size Maximum: 65536 bytes 00:10:43.738 Persistent Memory Region: Not Supported 00:10:43.738 Optional Asynchronous Events Supported 00:10:43.738 Namespace Attribute Notices: Supported 00:10:43.738 Firmware Activation Notices: Not Supported 00:10:43.738 ANA Change Notices: Not Supported 00:10:43.738 PLE Aggregate Log Change Notices: Not Supported 00:10:43.738 LBA Status Info Alert Notices: Not Supported 00:10:43.738 EGE Aggregate Log Change Notices: Not Supported 00:10:43.738 Normal NVM Subsystem Shutdown event: Not Supported 00:10:43.738 Zone Descriptor Change Notices: Not Supported 00:10:43.738 Discovery Log Change Notices: Not Supported 00:10:43.738 Controller Attributes 00:10:43.738 128-bit Host Identifier: Not Supported 00:10:43.738 Non-Operational Permissive Mode: Not Supported 00:10:43.738 NVM Sets: Not Supported 00:10:43.738 Read Recovery Levels: Not Supported 00:10:43.738 Endurance Groups: Not Supported 00:10:43.738 Predictable Latency Mode: Not Supported 00:10:43.738 Traffic Based Keep ALive: Not Supported 00:10:43.738 Namespace Granularity: Not Supported 00:10:43.738 SQ Associations: Not Supported 00:10:43.738 UUID List: Not Supported 00:10:43.738 Multi-Domain Subsystem: Not Supported 00:10:43.738 Fixed Capacity Management: Not Supported 00:10:43.738 Variable Capacity Management: Not Supported 00:10:43.738 Delete Endurance Group: Not Supported 00:10:43.738 Delete NVM Set: Not Supported 00:10:43.738 Extended LBA Formats Supported: Supported 00:10:43.738 Flexible Data Placement Supported: Not Supported 00:10:43.738 00:10:43.738 Controller Memory Buffer Support 00:10:43.738 ================================ 00:10:43.738 Supported: No 00:10:43.738 00:10:43.738 Persistent Memory Region Support 00:10:43.738 ================================ 00:10:43.738 Supported: No 00:10:43.738 00:10:43.738 Admin Command Set Attributes 00:10:43.738 ============================ 00:10:43.738 Security Send/Receive: Not Supported 00:10:43.738 Format NVM: Supported 00:10:43.738 Firmware Activate/Download: Not Supported 00:10:43.738 Namespace Management: Supported 00:10:43.738 Device Self-Test: Not Supported 00:10:43.738 Directives: Supported 00:10:43.738 NVMe-MI: Not Supported 00:10:43.738 Virtualization Management: Not Supported 00:10:43.738 Doorbell Buffer Config: Supported 00:10:43.738 Get LBA Status Capability: Not Supported 00:10:43.738 Command & Feature Lockdown Capability: Not Supported 00:10:43.738 Abort Command Limit: 4 00:10:43.738 Async Event Request Limit: 4 00:10:43.738 Number of Firmware Slots: N/A 00:10:43.738 Firmware Slot 1 Read-Only: N/A 00:10:43.738 Firmware Activation Without Reset: N/A 00:10:43.738 Multiple Update Detection Support: N/A 00:10:43.738 Firmware Update Granularity: No 
Information Provided 00:10:43.738 Per-Namespace SMART Log: Yes 00:10:43.738 Asymmetric Namespace Access Log Page: Not Supported 00:10:43.738 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:43.738 Command Effects Log Page: Supported 00:10:43.738 Get Log Page Extended Data: Supported 00:10:43.738 Telemetry Log Pages: Not Supported 00:10:43.738 Persistent Event Log Pages: Not Supported 00:10:43.738 Supported Log Pages Log Page: May Support 00:10:43.738 Commands Supported & Effects Log Page: Not Supported 00:10:43.738 Feature Identifiers & Effects Log Page:May Support 00:10:43.738 NVMe-MI Commands & Effects Log Page: May Support 00:10:43.738 Data Area 4 for Telemetry Log: Not Supported 00:10:43.738 Error Log Page Entries Supported: 1 00:10:43.738 Keep Alive: Not Supported 00:10:43.738 00:10:43.738 NVM Command Set Attributes 00:10:43.738 ========================== 00:10:43.738 Submission Queue Entry Size 00:10:43.738 Max: 64 00:10:43.738 Min: 64 00:10:43.738 Completion Queue Entry Size 00:10:43.738 Max: 16 00:10:43.738 Min: 16 00:10:43.738 Number of Namespaces: 256 00:10:43.738 Compare Command: Supported 00:10:43.738 Write Uncorrectable Command: Not Supported 00:10:43.738 Dataset Management Command: Supported 00:10:43.738 Write Zeroes Command: Supported 00:10:43.738 Set Features Save Field: Supported 00:10:43.738 Reservations: Not Supported 00:10:43.738 Timestamp: Supported 00:10:43.738 Copy: Supported 00:10:43.738 Volatile Write Cache: Present 00:10:43.738 Atomic Write Unit (Normal): 1 00:10:43.738 Atomic Write Unit (PFail): 1 00:10:43.738 Atomic Compare & Write Unit: 1 00:10:43.738 Fused Compare & Write: Not Supported 00:10:43.738 Scatter-Gather List 00:10:43.738 SGL Command Set: Supported 00:10:43.738 SGL Keyed: Not Supported 00:10:43.738 SGL Bit Bucket Descriptor: Not Supported 00:10:43.738 SGL Metadata Pointer: Not Supported 00:10:43.738 Oversized SGL: Not Supported 00:10:43.738 SGL Metadata Address: Not Supported 00:10:43.738 SGL Offset: Not Supported 00:10:43.738 Transport SGL Data Block: Not Supported 00:10:43.738 Replay Protected Memory Block: Not Supported 00:10:43.738 00:10:43.738 Firmware Slot Information 00:10:43.738 ========================= 00:10:43.738 Active slot: 1 00:10:43.738 Slot 1 Firmware Revision: 1.0 00:10:43.738 00:10:43.738 00:10:43.738 Commands Supported and Effects 00:10:43.738 ============================== 00:10:43.738 Admin Commands 00:10:43.738 -------------- 00:10:43.738 Delete I/O Submission Queue (00h): Supported 00:10:43.738 Create I/O Submission Queue (01h): Supported 00:10:43.738 Get Log Page (02h): Supported 00:10:43.738 Delete I/O Completion Queue (04h): Supported 00:10:43.738 Create I/O Completion Queue (05h): Supported 00:10:43.738 Identify (06h): Supported 00:10:43.738 Abort (08h): Supported 00:10:43.738 Set Features (09h): Supported 00:10:43.738 Get Features (0Ah): Supported 00:10:43.738 Asynchronous Event Request (0Ch): Supported 00:10:43.738 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:43.738 Directive Send (19h): Supported 00:10:43.738 Directive Receive (1Ah): Supported 00:10:43.738 Virtualization Management (1Ch): Supported 00:10:43.738 Doorbell Buffer Config (7Ch): Supported 00:10:43.738 Format NVM (80h): Supported LBA-Change 00:10:43.738 I/O Commands 00:10:43.738 ------------ 00:10:43.738 Flush (00h): Supported LBA-Change 00:10:43.738 Write (01h): Supported LBA-Change 00:10:43.738 Read (02h): Supported 00:10:43.738 Compare (05h): Supported 00:10:43.738 Write Zeroes (08h): Supported LBA-Change 00:10:43.738 Dataset Management 
(09h): Supported LBA-Change 00:10:43.738 Unknown (0Ch): Supported 00:10:43.738 Unknown (12h): Supported 00:10:43.738 Copy (19h): Supported LBA-Change 00:10:43.738 Unknown (1Dh): Supported LBA-Change 00:10:43.738 00:10:43.738 Error Log 00:10:43.738 ========= 00:10:43.738 00:10:43.738 Arbitration 00:10:43.738 =========== 00:10:43.739 Arbitration Burst: no limit 00:10:43.739 00:10:43.739 Power Management 00:10:43.739 ================ 00:10:43.739 Number of Power States: 1 00:10:43.739 Current Power State: Power State #0 00:10:43.739 Power State #0: 00:10:43.739 Max Power: 25.00 W 00:10:43.739 Non-Operational State: Operational 00:10:43.739 Entry Latency: 16 microseconds 00:10:43.739 Exit Latency: 4 microseconds 00:10:43.739 Relative Read Throughput: 0 00:10:43.739 Relative Read Latency: 0 00:10:43.739 Relative Write Throughput: 0 00:10:43.739 Relative Write Latency: 0 00:10:43.739 Idle Power: Not Reported 00:10:43.739 Active Power: Not Reported 00:10:43.739 Non-Operational Permissive Mode: Not Supported 00:10:43.739 00:10:43.739 Health Information 00:10:43.739 ================== 00:10:43.739 Critical Warnings: 00:10:43.739 Available Spare Space: OK 00:10:43.739 Temperature: [2024-12-10 21:41:51.415753] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 65502 terminated unexpected 00:10:43.739 OK 00:10:43.739 Device Reliability: OK 00:10:43.739 Read Only: No 00:10:43.739 Volatile Memory Backup: OK 00:10:43.739 Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.739 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:43.739 Available Spare: 0% 00:10:43.739 Available Spare Threshold: 0% 00:10:43.739 Life Percentage Used: 0% 00:10:43.739 Data Units Read: 1142 00:10:43.739 Data Units Written: 1008 00:10:43.739 Host Read Commands: 58126 00:10:43.739 Host Write Commands: 56907 00:10:43.739 Controller Busy Time: 0 minutes 00:10:43.739 Power Cycles: 0 00:10:43.739 Power On Hours: 0 hours 00:10:43.739 Unsafe Shutdowns: 0 00:10:43.739 Unrecoverable Media Errors: 0 00:10:43.739 Lifetime Error Log Entries: 0 00:10:43.739 Warning Temperature Time: 0 minutes 00:10:43.739 Critical Temperature Time: 0 minutes 00:10:43.739 00:10:43.739 Number of Queues 00:10:43.739 ================ 00:10:43.739 Number of I/O Submission Queues: 64 00:10:43.739 Number of I/O Completion Queues: 64 00:10:43.739 00:10:43.739 ZNS Specific Controller Data 00:10:43.739 ============================ 00:10:43.739 Zone Append Size Limit: 0 00:10:43.739 00:10:43.739 00:10:43.739 Active Namespaces 00:10:43.739 ================= 00:10:43.739 Namespace ID:1 00:10:43.739 Error Recovery Timeout: Unlimited 00:10:43.739 Command Set Identifier: NVM (00h) 00:10:43.739 Deallocate: Supported 00:10:43.739 Deallocated/Unwritten Error: Supported 00:10:43.739 Deallocated Read Value: All 0x00 00:10:43.739 Deallocate in Write Zeroes: Not Supported 00:10:43.739 Deallocated Guard Field: 0xFFFF 00:10:43.739 Flush: Supported 00:10:43.739 Reservation: Not Supported 00:10:43.739 Namespace Sharing Capabilities: Private 00:10:43.739 Size (in LBAs): 1310720 (5GiB) 00:10:43.739 Capacity (in LBAs): 1310720 (5GiB) 00:10:43.739 Utilization (in LBAs): 1310720 (5GiB) 00:10:43.739 Thin Provisioning: Not Supported 00:10:43.739 Per-NS Atomic Units: No 00:10:43.739 Maximum Single Source Range Length: 128 00:10:43.739 Maximum Copy Length: 128 00:10:43.739 Maximum Source Range Count: 128 00:10:43.739 NGUID/EUI64 Never Reused: No 00:10:43.739 Namespace Write Protected: No 00:10:43.739 Number of LBA Formats: 8 00:10:43.739 Current LBA 
Format: LBA Format #04 00:10:43.739 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:43.739 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:43.739 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:43.739 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:43.739 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:43.739 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:43.739 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:43.739 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:43.739 00:10:43.739 NVM Specific Namespace Data 00:10:43.739 =========================== 00:10:43.739 Logical Block Storage Tag Mask: 0 00:10:43.739 Protection Information Capabilities: 00:10:43.739 16b Guard Protection Information Storage Tag Support: No 00:10:43.739 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:43.739 Storage Tag Check Read Support: No 00:10:43.739 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.739 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.739 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.739 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.739 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.739 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.739 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.739 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.739 ===================================================== 00:10:43.739 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:43.739 ===================================================== 00:10:43.739 Controller Capabilities/Features 00:10:43.739 ================================ 00:10:43.739 Vendor ID: 1b36 00:10:43.739 Subsystem Vendor ID: 1af4 00:10:43.739 Serial Number: 12343 00:10:43.739 Model Number: QEMU NVMe Ctrl 00:10:43.739 Firmware Version: 8.0.0 00:10:43.739 Recommended Arb Burst: 6 00:10:43.739 IEEE OUI Identifier: 00 54 52 00:10:43.739 Multi-path I/O 00:10:43.739 May have multiple subsystem ports: No 00:10:43.739 May have multiple controllers: Yes 00:10:43.739 Associated with SR-IOV VF: No 00:10:43.739 Max Data Transfer Size: 524288 00:10:43.739 Max Number of Namespaces: 256 00:10:43.739 Max Number of I/O Queues: 64 00:10:43.739 NVMe Specification Version (VS): 1.4 00:10:43.739 NVMe Specification Version (Identify): 1.4 00:10:43.739 Maximum Queue Entries: 2048 00:10:43.739 Contiguous Queues Required: Yes 00:10:43.739 Arbitration Mechanisms Supported 00:10:43.739 Weighted Round Robin: Not Supported 00:10:43.739 Vendor Specific: Not Supported 00:10:43.739 Reset Timeout: 7500 ms 00:10:43.739 Doorbell Stride: 4 bytes 00:10:43.739 NVM Subsystem Reset: Not Supported 00:10:43.739 Command Sets Supported 00:10:43.739 NVM Command Set: Supported 00:10:43.739 Boot Partition: Not Supported 00:10:43.739 Memory Page Size Minimum: 4096 bytes 00:10:43.739 Memory Page Size Maximum: 65536 bytes 00:10:43.739 Persistent Memory Region: Not Supported 00:10:43.739 Optional Asynchronous Events Supported 00:10:43.739 Namespace Attribute Notices: Supported 00:10:43.739 Firmware Activation Notices: Not Supported 00:10:43.739 ANA Change Notices: Not Supported 00:10:43.739 PLE Aggregate 
Log Change Notices: Not Supported 00:10:43.739 LBA Status Info Alert Notices: Not Supported 00:10:43.739 EGE Aggregate Log Change Notices: Not Supported 00:10:43.739 Normal NVM Subsystem Shutdown event: Not Supported 00:10:43.739 Zone Descriptor Change Notices: Not Supported 00:10:43.739 Discovery Log Change Notices: Not Supported 00:10:43.739 Controller Attributes 00:10:43.739 128-bit Host Identifier: Not Supported 00:10:43.739 Non-Operational Permissive Mode: Not Supported 00:10:43.739 NVM Sets: Not Supported 00:10:43.739 Read Recovery Levels: Not Supported 00:10:43.739 Endurance Groups: Supported 00:10:43.739 Predictable Latency Mode: Not Supported 00:10:43.739 Traffic Based Keep ALive: Not Supported 00:10:43.739 Namespace Granularity: Not Supported 00:10:43.739 SQ Associations: Not Supported 00:10:43.739 UUID List: Not Supported 00:10:43.739 Multi-Domain Subsystem: Not Supported 00:10:43.739 Fixed Capacity Management: Not Supported 00:10:43.739 Variable Capacity Management: Not Supported 00:10:43.739 Delete Endurance Group: Not Supported 00:10:43.739 Delete NVM Set: Not Supported 00:10:43.739 Extended LBA Formats Supported: Supported 00:10:43.739 Flexible Data Placement Supported: Supported 00:10:43.739 00:10:43.739 Controller Memory Buffer Support 00:10:43.739 ================================ 00:10:43.739 Supported: No 00:10:43.739 00:10:43.739 Persistent Memory Region Support 00:10:43.739 ================================ 00:10:43.739 Supported: No 00:10:43.739 00:10:43.739 Admin Command Set Attributes 00:10:43.739 ============================ 00:10:43.739 Security Send/Receive: Not Supported 00:10:43.739 Format NVM: Supported 00:10:43.739 Firmware Activate/Download: Not Supported 00:10:43.739 Namespace Management: Supported 00:10:43.739 Device Self-Test: Not Supported 00:10:43.739 Directives: Supported 00:10:43.739 NVMe-MI: Not Supported 00:10:43.739 Virtualization Management: Not Supported 00:10:43.739 Doorbell Buffer Config: Supported 00:10:43.739 Get LBA Status Capability: Not Supported 00:10:43.739 Command & Feature Lockdown Capability: Not Supported 00:10:43.739 Abort Command Limit: 4 00:10:43.739 Async Event Request Limit: 4 00:10:43.739 Number of Firmware Slots: N/A 00:10:43.739 Firmware Slot 1 Read-Only: N/A 00:10:43.739 Firmware Activation Without Reset: N/A 00:10:43.739 Multiple Update Detection Support: N/A 00:10:43.739 Firmware Update Granularity: No Information Provided 00:10:43.739 Per-Namespace SMART Log: Yes 00:10:43.739 Asymmetric Namespace Access Log Page: Not Supported 00:10:43.739 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:43.739 Command Effects Log Page: Supported 00:10:43.740 Get Log Page Extended Data: Supported 00:10:43.740 Telemetry Log Pages: Not Supported 00:10:43.740 Persistent Event Log Pages: Not Supported 00:10:43.740 Supported Log Pages Log Page: May Support 00:10:43.740 Commands Supported & Effects Log Page: Not Supported 00:10:43.740 Feature Identifiers & Effects Log Page:May Support 00:10:43.740 NVMe-MI Commands & Effects Log Page: May Support 00:10:43.740 Data Area 4 for Telemetry Log: Not Supported 00:10:43.740 Error Log Page Entries Supported: 1 00:10:43.740 Keep Alive: Not Supported 00:10:43.740 00:10:43.740 NVM Command Set Attributes 00:10:43.740 ========================== 00:10:43.740 Submission Queue Entry Size 00:10:43.740 Max: 64 00:10:43.740 Min: 64 00:10:43.740 Completion Queue Entry Size 00:10:43.740 Max: 16 00:10:43.740 Min: 16 00:10:43.740 Number of Namespaces: 256 00:10:43.740 Compare Command: Supported 00:10:43.740 Write 
Uncorrectable Command: Not Supported 00:10:43.740 Dataset Management Command: Supported 00:10:43.740 Write Zeroes Command: Supported 00:10:43.740 Set Features Save Field: Supported 00:10:43.740 Reservations: Not Supported 00:10:43.740 Timestamp: Supported 00:10:43.740 Copy: Supported 00:10:43.740 Volatile Write Cache: Present 00:10:43.740 Atomic Write Unit (Normal): 1 00:10:43.740 Atomic Write Unit (PFail): 1 00:10:43.740 Atomic Compare & Write Unit: 1 00:10:43.740 Fused Compare & Write: Not Supported 00:10:43.740 Scatter-Gather List 00:10:43.740 SGL Command Set: Supported 00:10:43.740 SGL Keyed: Not Supported 00:10:43.740 SGL Bit Bucket Descriptor: Not Supported 00:10:43.740 SGL Metadata Pointer: Not Supported 00:10:43.740 Oversized SGL: Not Supported 00:10:43.740 SGL Metadata Address: Not Supported 00:10:43.740 SGL Offset: Not Supported 00:10:43.740 Transport SGL Data Block: Not Supported 00:10:43.740 Replay Protected Memory Block: Not Supported 00:10:43.740 00:10:43.740 Firmware Slot Information 00:10:43.740 ========================= 00:10:43.740 Active slot: 1 00:10:43.740 Slot 1 Firmware Revision: 1.0 00:10:43.740 00:10:43.740 00:10:43.740 Commands Supported and Effects 00:10:43.740 ============================== 00:10:43.740 Admin Commands 00:10:43.740 -------------- 00:10:43.740 Delete I/O Submission Queue (00h): Supported 00:10:43.740 Create I/O Submission Queue (01h): Supported 00:10:43.740 Get Log Page (02h): Supported 00:10:43.740 Delete I/O Completion Queue (04h): Supported 00:10:43.740 Create I/O Completion Queue (05h): Supported 00:10:43.740 Identify (06h): Supported 00:10:43.740 Abort (08h): Supported 00:10:43.740 Set Features (09h): Supported 00:10:43.740 Get Features (0Ah): Supported 00:10:43.740 Asynchronous Event Request (0Ch): Supported 00:10:43.740 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:43.740 Directive Send (19h): Supported 00:10:43.740 Directive Receive (1Ah): Supported 00:10:43.740 Virtualization Management (1Ch): Supported 00:10:43.740 Doorbell Buffer Config (7Ch): Supported 00:10:43.740 Format NVM (80h): Supported LBA-Change 00:10:43.740 I/O Commands 00:10:43.740 ------------ 00:10:43.740 Flush (00h): Supported LBA-Change 00:10:43.740 Write (01h): Supported LBA-Change 00:10:43.740 Read (02h): Supported 00:10:43.740 Compare (05h): Supported 00:10:43.740 Write Zeroes (08h): Supported LBA-Change 00:10:43.740 Dataset Management (09h): Supported LBA-Change 00:10:43.740 Unknown (0Ch): Supported 00:10:43.740 Unknown (12h): Supported 00:10:43.740 Copy (19h): Supported LBA-Change 00:10:43.740 Unknown (1Dh): Supported LBA-Change 00:10:43.740 00:10:43.740 Error Log 00:10:43.740 ========= 00:10:43.740 00:10:43.740 Arbitration 00:10:43.740 =========== 00:10:43.740 Arbitration Burst: no limit 00:10:43.740 00:10:43.740 Power Management 00:10:43.740 ================ 00:10:43.740 Number of Power States: 1 00:10:43.740 Current Power State: Power State #0 00:10:43.740 Power State #0: 00:10:43.740 Max Power: 25.00 W 00:10:43.740 Non-Operational State: Operational 00:10:43.740 Entry Latency: 16 microseconds 00:10:43.740 Exit Latency: 4 microseconds 00:10:43.740 Relative Read Throughput: 0 00:10:43.740 Relative Read Latency: 0 00:10:43.740 Relative Write Throughput: 0 00:10:43.740 Relative Write Latency: 0 00:10:43.740 Idle Power: Not Reported 00:10:43.740 Active Power: Not Reported 00:10:43.740 Non-Operational Permissive Mode: Not Supported 00:10:43.740 00:10:43.740 Health Information 00:10:43.740 ================== 00:10:43.740 Critical Warnings: 00:10:43.740 
Available Spare Space: OK 00:10:43.740 Temperature: OK 00:10:43.740 Device Reliability: OK 00:10:43.740 Read Only: No 00:10:43.740 Volatile Memory Backup: OK 00:10:43.740 Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.740 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:43.740 Available Spare: 0% 00:10:43.740 Available Spare Threshold: 0% 00:10:43.740 Life Percentage Used: 0% 00:10:43.740 Data Units Read: 927 00:10:43.740 Data Units Written: 856 00:10:43.740 Host Read Commands: 41097 00:10:43.740 Host Write Commands: 40520 00:10:43.740 Controller Busy Time: 0 minutes 00:10:43.740 Power Cycles: 0 00:10:43.740 Power On Hours: 0 hours 00:10:43.740 Unsafe Shutdowns: 0 00:10:43.740 Unrecoverable Media Errors: 0 00:10:43.740 Lifetime Error Log Entries: 0 00:10:43.740 Warning Temperature Time: 0 minutes 00:10:43.740 Critical Temperature Time: 0 minutes 00:10:43.740 00:10:43.740 Number of Queues 00:10:43.740 ================ 00:10:43.740 Number of I/O Submission Queues: 64 00:10:43.740 Number of I/O Completion Queues: 64 00:10:43.740 00:10:43.740 ZNS Specific Controller Data 00:10:43.740 ============================ 00:10:43.740 Zone Append Size Limit: 0 00:10:43.740 00:10:43.740 00:10:43.740 Active Namespaces 00:10:43.740 ================= 00:10:43.740 Namespace ID:1 00:10:43.740 Error Recovery Timeout: Unlimited 00:10:43.740 Command Set Identifier: NVM (00h) 00:10:43.740 Deallocate: Supported 00:10:43.740 Deallocated/Unwritten Error: Supported 00:10:43.740 Deallocated Read Value: All 0x00 00:10:43.740 Deallocate in Write Zeroes: Not Supported 00:10:43.740 Deallocated Guard Field: 0xFFFF 00:10:43.740 Flush: Supported 00:10:43.740 Reservation: Not Supported 00:10:43.740 Namespace Sharing Capabilities: Multiple Controllers 00:10:43.740 Size (in LBAs): 262144 (1GiB) 00:10:43.740 Capacity (in LBAs): 262144 (1GiB) 00:10:43.740 Utilization (in LBAs): 262144 (1GiB) 00:10:43.740 Thin Provisioning: Not Supported 00:10:43.740 Per-NS Atomic Units: No 00:10:43.740 Maximum Single Source Range Length: 128 00:10:43.740 Maximum Copy Length: 128 00:10:43.740 Maximum Source Range Count: 128 00:10:43.740 NGUID/EUI64 Never Reused: No 00:10:43.740 Namespace Write Protected: No 00:10:43.740 Endurance group ID: 1 00:10:43.740 Number of LBA Formats: 8 00:10:43.740 Current LBA Format: LBA Format #04 00:10:43.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:43.740 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:43.740 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:43.740 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:43.740 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:43.740 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:43.740 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:43.740 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:43.740 00:10:43.740 Get Feature FDP: 00:10:43.740 ================ 00:10:43.740 Enabled: Yes 00:10:43.740 FDP configuration index: 0 00:10:43.740 00:10:43.740 FDP configurations log page 00:10:43.740 =========================== 00:10:43.740 Number of FDP configurations: 1 00:10:43.740 Version: 0 00:10:43.740 Size: 112 00:10:43.740 FDP Configuration Descriptor: 0 00:10:43.740 Descriptor Size: 96 00:10:43.740 Reclaim Group Identifier format: 2 00:10:43.740 FDP Volatile Write Cache: Not Present 00:10:43.740 FDP Configuration: Valid 00:10:43.740 Vendor Specific Size: 0 00:10:43.740 Number of Reclaim Groups: 2 00:10:43.740 Number of Reclaim Unit Handles: 8 00:10:43.740 Max Placement Identifiers: 128 00:10:43.740 Number of
Namespaces Supported: 256 00:10:43.740 Reclaim Unit Nominal Size: 6000000 bytes 00:10:43.740 Estimated Reclaim Unit Time Limit: Not Reported 00:10:43.740 RUH Desc #000: RUH Type: Initially Isolated 00:10:43.740 RUH Desc #001: RUH Type: Initially Isolated 00:10:43.740 RUH Desc #002: RUH Type: Initially Isolated 00:10:43.740 RUH Desc #003: RUH Type: Initially Isolated 00:10:43.740 RUH Desc #004: RUH Type: Initially Isolated 00:10:43.740 RUH Desc #005: RUH Type: Initially Isolated 00:10:43.740 RUH Desc #006: RUH Type: Initially Isolated 00:10:43.740 RUH Desc #007: RUH Type: Initially Isolated 00:10:43.740 00:10:43.740 FDP reclaim unit handle usage log page 00:10:43.740 ====================================== 00:10:43.740 Number of Reclaim Unit Handles: 8 00:10:43.740 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:43.740 RUH Usage Desc #001: RUH Attributes: Unused 00:10:43.740 RUH Usage Desc #002: RUH Attributes: Unused 00:10:43.740 RUH Usage Desc #003: RUH Attributes: Unused 00:10:43.741 RUH Usage Desc #004: RUH Attributes: Unused 00:10:43.741 RUH Usage Desc #005: RUH Attributes: Unused 00:10:43.741 RUH Usage Desc #006: RUH Attributes: Unused 00:10:43.741 RUH Usage Desc #007: RUH Attributes: Unused 00:10:43.741 00:10:43.741 FDP statistics log page 00:10:43.741 ======================= 00:10:43.741 Host bytes with metadata written: 552706048 00:10:43.741 [2024-12-10 21:41:51.417599] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 65502 terminated unexpected 00:10:43.741 Media bytes with metadata written: 553504768 00:10:43.741 Media bytes erased: 0 00:10:43.741 00:10:43.741 FDP events log page 00:10:43.741 =================== 00:10:43.741 Number of FDP events: 0 00:10:43.741 00:10:43.741 NVM Specific Namespace Data 00:10:43.741 =========================== 00:10:43.741 Logical Block Storage Tag Mask: 0 00:10:43.741 Protection Information Capabilities: 00:10:43.741 16b Guard Protection Information Storage Tag Support: No 00:10:43.741 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:43.741 Storage Tag Check Read Support: No 00:10:43.741 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.741 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.741 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.741 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.741 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.741 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.741 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.741 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.741 ===================================================== 00:10:43.741 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:43.741 ===================================================== 00:10:43.741 Controller Capabilities/Features 00:10:43.741 ================================ 00:10:43.741 Vendor ID: 1b36 00:10:43.741 Subsystem Vendor ID: 1af4 00:10:43.741 Serial Number: 12342 00:10:43.741 Model Number: QEMU NVMe Ctrl 00:10:43.741 Firmware Version: 8.0.0 00:10:43.741 Recommended Arb Burst: 6 00:10:43.741 IEEE OUI Identifier: 00 54 52 00:10:43.741 Multi-path I/O
00:10:43.741 May have multiple subsystem ports: No 00:10:43.741 May have multiple controllers: No 00:10:43.741 Associated with SR-IOV VF: No 00:10:43.741 Max Data Transfer Size: 524288 00:10:43.741 Max Number of Namespaces: 256 00:10:43.741 Max Number of I/O Queues: 64 00:10:43.741 NVMe Specification Version (VS): 1.4 00:10:43.741 NVMe Specification Version (Identify): 1.4 00:10:43.741 Maximum Queue Entries: 2048 00:10:43.741 Contiguous Queues Required: Yes 00:10:43.741 Arbitration Mechanisms Supported 00:10:43.741 Weighted Round Robin: Not Supported 00:10:43.741 Vendor Specific: Not Supported 00:10:43.741 Reset Timeout: 7500 ms 00:10:43.741 Doorbell Stride: 4 bytes 00:10:43.741 NVM Subsystem Reset: Not Supported 00:10:43.741 Command Sets Supported 00:10:43.741 NVM Command Set: Supported 00:10:43.741 Boot Partition: Not Supported 00:10:43.741 Memory Page Size Minimum: 4096 bytes 00:10:43.741 Memory Page Size Maximum: 65536 bytes 00:10:43.741 Persistent Memory Region: Not Supported 00:10:43.741 Optional Asynchronous Events Supported 00:10:43.741 Namespace Attribute Notices: Supported 00:10:43.741 Firmware Activation Notices: Not Supported 00:10:43.741 ANA Change Notices: Not Supported 00:10:43.741 PLE Aggregate Log Change Notices: Not Supported 00:10:43.741 LBA Status Info Alert Notices: Not Supported 00:10:43.741 EGE Aggregate Log Change Notices: Not Supported 00:10:43.741 Normal NVM Subsystem Shutdown event: Not Supported 00:10:43.741 Zone Descriptor Change Notices: Not Supported 00:10:43.741 Discovery Log Change Notices: Not Supported 00:10:43.741 Controller Attributes 00:10:43.741 128-bit Host Identifier: Not Supported 00:10:43.741 Non-Operational Permissive Mode: Not Supported 00:10:43.741 NVM Sets: Not Supported 00:10:43.741 Read Recovery Levels: Not Supported 00:10:43.741 Endurance Groups: Not Supported 00:10:43.741 Predictable Latency Mode: Not Supported 00:10:43.741 Traffic Based Keep Alive: Not Supported 00:10:43.741 Namespace Granularity: Not Supported 00:10:43.741 SQ Associations: Not Supported 00:10:43.741 UUID List: Not Supported 00:10:43.741 Multi-Domain Subsystem: Not Supported 00:10:43.741 Fixed Capacity Management: Not Supported 00:10:43.741 Variable Capacity Management: Not Supported 00:10:43.741 Delete Endurance Group: Not Supported 00:10:43.741 Delete NVM Set: Not Supported 00:10:43.741 Extended LBA Formats Supported: Supported 00:10:43.741 Flexible Data Placement Supported: Not Supported 00:10:43.741 00:10:43.741 Controller Memory Buffer Support 00:10:43.741 ================================ 00:10:43.741 Supported: No 00:10:43.741 00:10:43.741 Persistent Memory Region Support 00:10:43.741 ================================ 00:10:43.741 Supported: No 00:10:43.741 00:10:43.741 Admin Command Set Attributes 00:10:43.741 ============================ 00:10:43.741 Security Send/Receive: Not Supported 00:10:43.741 Format NVM: Supported 00:10:43.741 Firmware Activate/Download: Not Supported 00:10:43.741 Namespace Management: Supported 00:10:43.741 Device Self-Test: Not Supported 00:10:43.741 Directives: Supported 00:10:43.741 NVMe-MI: Not Supported 00:10:43.741 Virtualization Management: Not Supported 00:10:43.741 Doorbell Buffer Config: Supported 00:10:43.741 Get LBA Status Capability: Not Supported 00:10:43.741 Command & Feature Lockdown Capability: Not Supported 00:10:43.741 Abort Command Limit: 4 00:10:43.741 Async Event Request Limit: 4 00:10:43.741 Number of Firmware Slots: N/A 00:10:43.741 Firmware Slot 1 Read-Only: N/A 00:10:43.741 Firmware Activation Without Reset: N/A
00:10:43.741 Multiple Update Detection Support: N/A 00:10:43.741 Firmware Update Granularity: No Information Provided 00:10:43.741 Per-Namespace SMART Log: Yes 00:10:43.741 Asymmetric Namespace Access Log Page: Not Supported 00:10:43.741 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:43.741 Command Effects Log Page: Supported 00:10:43.741 Get Log Page Extended Data: Supported 00:10:43.741 Telemetry Log Pages: Not Supported 00:10:43.741 Persistent Event Log Pages: Not Supported 00:10:43.741 Supported Log Pages Log Page: May Support 00:10:43.741 Commands Supported & Effects Log Page: Not Supported 00:10:43.741 Feature Identifiers & Effects Log Page: May Support 00:10:43.741 NVMe-MI Commands & Effects Log Page: May Support 00:10:43.741 Data Area 4 for Telemetry Log: Not Supported 00:10:43.741 Error Log Page Entries Supported: 1 00:10:43.741 Keep Alive: Not Supported 00:10:43.741 00:10:43.741 NVM Command Set Attributes 00:10:43.741 ========================== 00:10:43.741 Submission Queue Entry Size 00:10:43.741 Max: 64 00:10:43.741 Min: 64 00:10:43.741 Completion Queue Entry Size 00:10:43.741 Max: 16 00:10:43.741 Min: 16 00:10:43.741 Number of Namespaces: 256 00:10:43.741 Compare Command: Supported 00:10:43.741 Write Uncorrectable Command: Not Supported 00:10:43.741 Dataset Management Command: Supported 00:10:43.741 Write Zeroes Command: Supported 00:10:43.741 Set Features Save Field: Supported 00:10:43.741 Reservations: Not Supported 00:10:43.741 Timestamp: Supported 00:10:43.741 Copy: Supported 00:10:43.741 Volatile Write Cache: Present 00:10:43.741 Atomic Write Unit (Normal): 1 00:10:43.741 Atomic Write Unit (PFail): 1 00:10:43.741 Atomic Compare & Write Unit: 1 00:10:43.741 Fused Compare & Write: Not Supported 00:10:43.741 Scatter-Gather List 00:10:43.741 SGL Command Set: Supported 00:10:43.741 SGL Keyed: Not Supported 00:10:43.742 SGL Bit Bucket Descriptor: Not Supported 00:10:43.742 SGL Metadata Pointer: Not Supported 00:10:43.742 Oversized SGL: Not Supported 00:10:43.742 SGL Metadata Address: Not Supported 00:10:43.742 SGL Offset: Not Supported 00:10:43.742 Transport SGL Data Block: Not Supported 00:10:43.742 Replay Protected Memory Block: Not Supported 00:10:43.742 00:10:43.742 Firmware Slot Information 00:10:43.742 ========================= 00:10:43.742 Active slot: 1 00:10:43.742 Slot 1 Firmware Revision: 1.0 00:10:43.742 00:10:43.742 00:10:43.742 Commands Supported and Effects 00:10:43.742 ============================== 00:10:43.742 Admin Commands 00:10:43.742 -------------- 00:10:43.742 Delete I/O Submission Queue (00h): Supported 00:10:43.742 Create I/O Submission Queue (01h): Supported 00:10:43.742 Get Log Page (02h): Supported 00:10:43.742 Delete I/O Completion Queue (04h): Supported 00:10:43.742 Create I/O Completion Queue (05h): Supported 00:10:43.742 Identify (06h): Supported 00:10:43.742 Abort (08h): Supported 00:10:43.742 Set Features (09h): Supported 00:10:43.742 Get Features (0Ah): Supported 00:10:43.742 Asynchronous Event Request (0Ch): Supported 00:10:43.742 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:43.742 Directive Send (19h): Supported 00:10:43.742 Directive Receive (1Ah): Supported 00:10:43.742 Virtualization Management (1Ch): Supported 00:10:43.742 Doorbell Buffer Config (7Ch): Supported 00:10:43.742 Format NVM (80h): Supported LBA-Change 00:10:43.742 I/O Commands 00:10:43.742 ------------ 00:10:43.742 Flush (00h): Supported LBA-Change 00:10:43.742 Write (01h): Supported LBA-Change 00:10:43.742 Read (02h): Supported 00:10:43.742 Compare (05h):
Supported 00:10:43.742 Write Zeroes (08h): Supported LBA-Change 00:10:43.742 Dataset Management (09h): Supported LBA-Change 00:10:43.742 Unknown (0Ch): Supported 00:10:43.742 Unknown (12h): Supported 00:10:43.742 Copy (19h): Supported LBA-Change 00:10:43.742 Unknown (1Dh): Supported LBA-Change 00:10:43.742 00:10:43.742 Error Log 00:10:43.742 ========= 00:10:43.742 00:10:43.742 Arbitration 00:10:43.742 =========== 00:10:43.742 Arbitration Burst: no limit 00:10:43.742 00:10:43.742 Power Management 00:10:43.742 ================ 00:10:43.742 Number of Power States: 1 00:10:43.742 Current Power State: Power State #0 00:10:43.742 Power State #0: 00:10:43.742 Max Power: 25.00 W 00:10:43.742 Non-Operational State: Operational 00:10:43.742 Entry Latency: 16 microseconds 00:10:43.742 Exit Latency: 4 microseconds 00:10:43.742 Relative Read Throughput: 0 00:10:43.742 Relative Read Latency: 0 00:10:43.742 Relative Write Throughput: 0 00:10:43.742 Relative Write Latency: 0 00:10:43.742 Idle Power: Not Reported 00:10:43.742 Active Power: Not Reported 00:10:43.742 Non-Operational Permissive Mode: Not Supported 00:10:43.742 00:10:43.742 Health Information 00:10:43.742 ================== 00:10:43.742 Critical Warnings: 00:10:43.742 Available Spare Space: OK 00:10:43.742 Temperature: OK 00:10:43.742 Device Reliability: OK 00:10:43.742 Read Only: No 00:10:43.742 Volatile Memory Backup: OK 00:10:43.742 Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.742 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:43.742 Available Spare: 0% 00:10:43.742 Available Spare Threshold: 0% 00:10:43.742 Life Percentage Used: 0% 00:10:43.742 Data Units Read: 2530 00:10:43.742 Data Units Written: 2317 00:10:43.742 Host Read Commands: 121043 00:10:43.742 Host Write Commands: 119312 00:10:43.742 Controller Busy Time: 0 minutes 00:10:43.742 Power Cycles: 0 00:10:43.742 Power On Hours: 0 hours 00:10:43.742 Unsafe Shutdowns: 0 00:10:43.742 Unrecoverable Media Errors: 0 00:10:43.742 Lifetime Error Log Entries: 0 00:10:43.742 Warning Temperature Time: 0 minutes 00:10:43.742 Critical Temperature Time: 0 minutes 00:10:43.742 00:10:43.742 Number of Queues 00:10:43.742 ================ 00:10:43.742 Number of I/O Submission Queues: 64 00:10:43.742 Number of I/O Completion Queues: 64 00:10:43.742 00:10:43.742 ZNS Specific Controller Data 00:10:43.742 ============================ 00:10:43.742 Zone Append Size Limit: 0 00:10:43.742 00:10:43.742 00:10:43.742 Active Namespaces 00:10:43.742 ================= 00:10:43.742 Namespace ID:1 00:10:43.742 Error Recovery Timeout: Unlimited 00:10:43.742 Command Set Identifier: NVM (00h) 00:10:43.742 Deallocate: Supported 00:10:43.742 Deallocated/Unwritten Error: Supported 00:10:43.742 Deallocated Read Value: All 0x00 00:10:43.742 Deallocate in Write Zeroes: Not Supported 00:10:43.742 Deallocated Guard Field: 0xFFFF 00:10:43.742 Flush: Supported 00:10:43.742 Reservation: Not Supported 00:10:43.742 Namespace Sharing Capabilities: Private 00:10:43.742 Size (in LBAs): 1048576 (4GiB) 00:10:43.742 Capacity (in LBAs): 1048576 (4GiB) 00:10:43.742 Utilization (in LBAs): 1048576 (4GiB) 00:10:43.742 Thin Provisioning: Not Supported 00:10:43.742 Per-NS Atomic Units: No 00:10:43.742 Maximum Single Source Range Length: 128 00:10:43.742 Maximum Copy Length: 128 00:10:43.742 Maximum Source Range Count: 128 00:10:43.742 NGUID/EUI64 Never Reused: No 00:10:43.742 Namespace Write Protected: No 00:10:43.742 Number of LBA Formats: 8 00:10:43.742 Current LBA Format: LBA Format #04 00:10:43.742 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:10:43.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:43.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:43.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:43.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:43.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:43.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:43.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:43.742 00:10:43.742 NVM Specific Namespace Data 00:10:43.742 =========================== 00:10:43.742 Logical Block Storage Tag Mask: 0 00:10:43.742 Protection Information Capabilities: 00:10:43.742 16b Guard Protection Information Storage Tag Support: No 00:10:43.742 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:43.742 Storage Tag Check Read Support: No 00:10:43.742 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Namespace ID:2 00:10:43.742 Error Recovery Timeout: Unlimited 00:10:43.742 Command Set Identifier: NVM (00h) 00:10:43.742 Deallocate: Supported 00:10:43.742 Deallocated/Unwritten Error: Supported 00:10:43.742 Deallocated Read Value: All 0x00 00:10:43.742 Deallocate in Write Zeroes: Not Supported 00:10:43.742 Deallocated Guard Field: 0xFFFF 00:10:43.742 Flush: Supported 00:10:43.742 Reservation: Not Supported 00:10:43.742 Namespace Sharing Capabilities: Private 00:10:43.742 Size (in LBAs): 1048576 (4GiB) 00:10:43.742 Capacity (in LBAs): 1048576 (4GiB) 00:10:43.742 Utilization (in LBAs): 1048576 (4GiB) 00:10:43.742 Thin Provisioning: Not Supported 00:10:43.742 Per-NS Atomic Units: No 00:10:43.742 Maximum Single Source Range Length: 128 00:10:43.742 Maximum Copy Length: 128 00:10:43.742 Maximum Source Range Count: 128 00:10:43.742 NGUID/EUI64 Never Reused: No 00:10:43.742 Namespace Write Protected: No 00:10:43.742 Number of LBA Formats: 8 00:10:43.742 Current LBA Format: LBA Format #04 00:10:43.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:43.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:43.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:43.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:43.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:43.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:43.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:43.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:43.742 00:10:43.742 NVM Specific Namespace Data 00:10:43.742 =========================== 00:10:43.742 Logical Block Storage Tag Mask: 0 00:10:43.742 Protection Information Capabilities: 00:10:43.742 16b Guard Protection Information Storage Tag Support: No 00:10:43.742 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:10:43.742 Storage Tag Check Read Support: No 00:10:43.742 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.742 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Namespace ID:3 00:10:43.743 Error Recovery Timeout: Unlimited 00:10:43.743 Command Set Identifier: NVM (00h) 00:10:43.743 Deallocate: Supported 00:10:43.743 Deallocated/Unwritten Error: Supported 00:10:43.743 Deallocated Read Value: All 0x00 00:10:43.743 Deallocate in Write Zeroes: Not Supported 00:10:43.743 Deallocated Guard Field: 0xFFFF 00:10:43.743 Flush: Supported 00:10:43.743 Reservation: Not Supported 00:10:43.743 Namespace Sharing Capabilities: Private 00:10:43.743 Size (in LBAs): 1048576 (4GiB) 00:10:43.743 Capacity (in LBAs): 1048576 (4GiB) 00:10:43.743 Utilization (in LBAs): 1048576 (4GiB) 00:10:43.743 Thin Provisioning: Not Supported 00:10:43.743 Per-NS Atomic Units: No 00:10:43.743 Maximum Single Source Range Length: 128 00:10:43.743 Maximum Copy Length: 128 00:10:43.743 Maximum Source Range Count: 128 00:10:43.743 NGUID/EUI64 Never Reused: No 00:10:43.743 Namespace Write Protected: No 00:10:43.743 Number of LBA Formats: 8 00:10:43.743 Current LBA Format: LBA Format #04 00:10:43.743 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:43.743 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:43.743 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:43.743 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:43.743 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:43.743 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:43.743 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:43.743 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:43.743 00:10:43.743 NVM Specific Namespace Data 00:10:43.743 =========================== 00:10:43.743 Logical Block Storage Tag Mask: 0 00:10:43.743 Protection Information Capabilities: 00:10:43.743 16b Guard Protection Information Storage Tag Support: No 00:10:43.743 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:43.743 Storage Tag Check Read Support: No 00:10:43.743 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:43.743 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:43.743 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:44.310 ===================================================== 00:10:44.310 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.310 ===================================================== 00:10:44.310 Controller Capabilities/Features 00:10:44.310 ================================ 00:10:44.310 Vendor ID: 1b36 00:10:44.310 Subsystem Vendor ID: 1af4 00:10:44.310 Serial Number: 12340 00:10:44.310 Model Number: QEMU NVMe Ctrl 00:10:44.310 Firmware Version: 8.0.0 00:10:44.310 Recommended Arb Burst: 6 00:10:44.310 IEEE OUI Identifier: 00 54 52 00:10:44.310 Multi-path I/O 00:10:44.310 May have multiple subsystem ports: No 00:10:44.310 May have multiple controllers: No 00:10:44.310 Associated with SR-IOV VF: No 00:10:44.310 Max Data Transfer Size: 524288 00:10:44.310 Max Number of Namespaces: 256 00:10:44.310 Max Number of I/O Queues: 64 00:10:44.310 NVMe Specification Version (VS): 1.4 00:10:44.310 NVMe Specification Version (Identify): 1.4 00:10:44.310 Maximum Queue Entries: 2048 00:10:44.310 Contiguous Queues Required: Yes 00:10:44.310 Arbitration Mechanisms Supported 00:10:44.310 Weighted Round Robin: Not Supported 00:10:44.310 Vendor Specific: Not Supported 00:10:44.310 Reset Timeout: 7500 ms 00:10:44.310 Doorbell Stride: 4 bytes 00:10:44.310 NVM Subsystem Reset: Not Supported 00:10:44.310 Command Sets Supported 00:10:44.310 NVM Command Set: Supported 00:10:44.310 Boot Partition: Not Supported 00:10:44.310 Memory Page Size Minimum: 4096 bytes 00:10:44.310 Memory Page Size Maximum: 65536 bytes 00:10:44.310 Persistent Memory Region: Not Supported 00:10:44.310 Optional Asynchronous Events Supported 00:10:44.310 Namespace Attribute Notices: Supported 00:10:44.310 Firmware Activation Notices: Not Supported 00:10:44.310 ANA Change Notices: Not Supported 00:10:44.310 PLE Aggregate Log Change Notices: Not Supported 00:10:44.310 LBA Status Info Alert Notices: Not Supported 00:10:44.310 EGE Aggregate Log Change Notices: Not Supported 00:10:44.310 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.310 Zone Descriptor Change Notices: Not Supported 00:10:44.310 Discovery Log Change Notices: Not Supported 00:10:44.310 Controller Attributes 00:10:44.310 128-bit Host Identifier: Not Supported 00:10:44.310 Non-Operational Permissive Mode: Not Supported 00:10:44.310 NVM Sets: Not Supported 00:10:44.310 Read Recovery Levels: Not Supported 00:10:44.310 Endurance Groups: Not Supported 00:10:44.310 Predictable Latency Mode: Not Supported 00:10:44.310 Traffic Based Keep Alive: Not Supported 00:10:44.310 Namespace Granularity: Not Supported 00:10:44.310 SQ Associations: Not Supported 00:10:44.310 UUID List: Not Supported 00:10:44.310 Multi-Domain Subsystem: Not Supported 00:10:44.310 Fixed Capacity Management: Not Supported 00:10:44.310 Variable Capacity Management: Not Supported 00:10:44.310 Delete Endurance Group: Not Supported 00:10:44.310 Delete NVM Set: Not Supported 00:10:44.310 Extended LBA Formats Supported: Supported 00:10:44.310 Flexible Data Placement Supported: Not Supported 00:10:44.310 00:10:44.310 Controller Memory Buffer Support 00:10:44.310 ================================ 00:10:44.310 Supported: No 00:10:44.310 00:10:44.310 Persistent Memory Region Support 00:10:44.310
================================ 00:10:44.310 Supported: No 00:10:44.310 00:10:44.310 Admin Command Set Attributes 00:10:44.310 ============================ 00:10:44.310 Security Send/Receive: Not Supported 00:10:44.310 Format NVM: Supported 00:10:44.310 Firmware Activate/Download: Not Supported 00:10:44.310 Namespace Management: Supported 00:10:44.310 Device Self-Test: Not Supported 00:10:44.310 Directives: Supported 00:10:44.310 NVMe-MI: Not Supported 00:10:44.310 Virtualization Management: Not Supported 00:10:44.310 Doorbell Buffer Config: Supported 00:10:44.310 Get LBA Status Capability: Not Supported 00:10:44.310 Command & Feature Lockdown Capability: Not Supported 00:10:44.311 Abort Command Limit: 4 00:10:44.311 Async Event Request Limit: 4 00:10:44.311 Number of Firmware Slots: N/A 00:10:44.311 Firmware Slot 1 Read-Only: N/A 00:10:44.311 Firmware Activation Without Reset: N/A 00:10:44.311 Multiple Update Detection Support: N/A 00:10:44.311 Firmware Update Granularity: No Information Provided 00:10:44.311 Per-Namespace SMART Log: Yes 00:10:44.311 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.311 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:44.311 Command Effects Log Page: Supported 00:10:44.311 Get Log Page Extended Data: Supported 00:10:44.311 Telemetry Log Pages: Not Supported 00:10:44.311 Persistent Event Log Pages: Not Supported 00:10:44.311 Supported Log Pages Log Page: May Support 00:10:44.311 Commands Supported & Effects Log Page: Not Supported 00:10:44.311 Feature Identifiers & Effects Log Page: May Support 00:10:44.311 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.311 Data Area 4 for Telemetry Log: Not Supported 00:10:44.311 Error Log Page Entries Supported: 1 00:10:44.311 Keep Alive: Not Supported 00:10:44.311 00:10:44.311 NVM Command Set Attributes 00:10:44.311 ========================== 00:10:44.311 Submission Queue Entry Size 00:10:44.311 Max: 64 00:10:44.311 Min: 64 00:10:44.311 Completion Queue Entry Size 00:10:44.311 Max: 16 00:10:44.311 Min: 16 00:10:44.311 Number of Namespaces: 256 00:10:44.311 Compare Command: Supported 00:10:44.311 Write Uncorrectable Command: Not Supported 00:10:44.311 Dataset Management Command: Supported 00:10:44.311 Write Zeroes Command: Supported 00:10:44.311 Set Features Save Field: Supported 00:10:44.311 Reservations: Not Supported 00:10:44.311 Timestamp: Supported 00:10:44.311 Copy: Supported 00:10:44.311 Volatile Write Cache: Present 00:10:44.311 Atomic Write Unit (Normal): 1 00:10:44.311 Atomic Write Unit (PFail): 1 00:10:44.311 Atomic Compare & Write Unit: 1 00:10:44.311 Fused Compare & Write: Not Supported 00:10:44.311 Scatter-Gather List 00:10:44.311 SGL Command Set: Supported 00:10:44.311 SGL Keyed: Not Supported 00:10:44.311 SGL Bit Bucket Descriptor: Not Supported 00:10:44.311 SGL Metadata Pointer: Not Supported 00:10:44.311 Oversized SGL: Not Supported 00:10:44.311 SGL Metadata Address: Not Supported 00:10:44.311 SGL Offset: Not Supported 00:10:44.311 Transport SGL Data Block: Not Supported 00:10:44.311 Replay Protected Memory Block: Not Supported 00:10:44.311 00:10:44.311 Firmware Slot Information 00:10:44.311 ========================= 00:10:44.311 Active slot: 1 00:10:44.311 Slot 1 Firmware Revision: 1.0 00:10:44.311 00:10:44.311 00:10:44.311 Commands Supported and Effects 00:10:44.311 ============================== 00:10:44.311 Admin Commands 00:10:44.311 -------------- 00:10:44.311 Delete I/O Submission Queue (00h): Supported 00:10:44.311 Create I/O Submission Queue (01h): Supported 00:10:44.311
Get Log Page (02h): Supported 00:10:44.311 Delete I/O Completion Queue (04h): Supported 00:10:44.311 Create I/O Completion Queue (05h): Supported 00:10:44.311 Identify (06h): Supported 00:10:44.311 Abort (08h): Supported 00:10:44.311 Set Features (09h): Supported 00:10:44.311 Get Features (0Ah): Supported 00:10:44.311 Asynchronous Event Request (0Ch): Supported 00:10:44.311 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.311 Directive Send (19h): Supported 00:10:44.311 Directive Receive (1Ah): Supported 00:10:44.311 Virtualization Management (1Ch): Supported 00:10:44.311 Doorbell Buffer Config (7Ch): Supported 00:10:44.311 Format NVM (80h): Supported LBA-Change 00:10:44.311 I/O Commands 00:10:44.311 ------------ 00:10:44.311 Flush (00h): Supported LBA-Change 00:10:44.311 Write (01h): Supported LBA-Change 00:10:44.311 Read (02h): Supported 00:10:44.311 Compare (05h): Supported 00:10:44.311 Write Zeroes (08h): Supported LBA-Change 00:10:44.311 Dataset Management (09h): Supported LBA-Change 00:10:44.311 Unknown (0Ch): Supported 00:10:44.311 Unknown (12h): Supported 00:10:44.311 Copy (19h): Supported LBA-Change 00:10:44.311 Unknown (1Dh): Supported LBA-Change 00:10:44.311 00:10:44.311 Error Log 00:10:44.311 ========= 00:10:44.311 00:10:44.311 Arbitration 00:10:44.311 =========== 00:10:44.311 Arbitration Burst: no limit 00:10:44.311 00:10:44.311 Power Management 00:10:44.311 ================ 00:10:44.311 Number of Power States: 1 00:10:44.311 Current Power State: Power State #0 00:10:44.311 Power State #0: 00:10:44.311 Max Power: 25.00 W 00:10:44.311 Non-Operational State: Operational 00:10:44.311 Entry Latency: 16 microseconds 00:10:44.311 Exit Latency: 4 microseconds 00:10:44.311 Relative Read Throughput: 0 00:10:44.311 Relative Read Latency: 0 00:10:44.311 Relative Write Throughput: 0 00:10:44.311 Relative Write Latency: 0 00:10:44.311 Idle Power: Not Reported 00:10:44.311 Active Power: Not Reported 00:10:44.311 Non-Operational Permissive Mode: Not Supported 00:10:44.311 00:10:44.311 Health Information 00:10:44.311 ================== 00:10:44.311 Critical Warnings: 00:10:44.311 Available Spare Space: OK 00:10:44.311 Temperature: OK 00:10:44.311 Device Reliability: OK 00:10:44.311 Read Only: No 00:10:44.311 Volatile Memory Backup: OK 00:10:44.311 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.311 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.311 Available Spare: 0% 00:10:44.311 Available Spare Threshold: 0% 00:10:44.311 Life Percentage Used: 0% 00:10:44.311 Data Units Read: 801 00:10:44.311 Data Units Written: 729 00:10:44.311 Host Read Commands: 39635 00:10:44.311 Host Write Commands: 39421 00:10:44.311 Controller Busy Time: 0 minutes 00:10:44.311 Power Cycles: 0 00:10:44.311 Power On Hours: 0 hours 00:10:44.311 Unsafe Shutdowns: 0 00:10:44.311 Unrecoverable Media Errors: 0 00:10:44.311 Lifetime Error Log Entries: 0 00:10:44.311 Warning Temperature Time: 0 minutes 00:10:44.311 Critical Temperature Time: 0 minutes 00:10:44.311 00:10:44.311 Number of Queues 00:10:44.311 ================ 00:10:44.311 Number of I/O Submission Queues: 64 00:10:44.311 Number of I/O Completion Queues: 64 00:10:44.311 00:10:44.311 ZNS Specific Controller Data 00:10:44.311 ============================ 00:10:44.311 Zone Append Size Limit: 0 00:10:44.311 00:10:44.311 00:10:44.311 Active Namespaces 00:10:44.311 ================= 00:10:44.311 Namespace ID:1 00:10:44.311 Error Recovery Timeout: Unlimited 00:10:44.311 Command Set Identifier: NVM (00h) 00:10:44.311 Deallocate: Supported 
00:10:44.311 Deallocated/Unwritten Error: Supported 00:10:44.311 Deallocated Read Value: All 0x00 00:10:44.311 Deallocate in Write Zeroes: Not Supported 00:10:44.311 Deallocated Guard Field: 0xFFFF 00:10:44.311 Flush: Supported 00:10:44.311 Reservation: Not Supported 00:10:44.311 Metadata Transferred as: Separate Metadata Buffer 00:10:44.311 Namespace Sharing Capabilities: Private 00:10:44.311 Size (in LBAs): 1548666 (5GiB) 00:10:44.311 Capacity (in LBAs): 1548666 (5GiB) 00:10:44.311 Utilization (in LBAs): 1548666 (5GiB) 00:10:44.311 Thin Provisioning: Not Supported 00:10:44.311 Per-NS Atomic Units: No 00:10:44.311 Maximum Single Source Range Length: 128 00:10:44.311 Maximum Copy Length: 128 00:10:44.311 Maximum Source Range Count: 128 00:10:44.311 NGUID/EUI64 Never Reused: No 00:10:44.311 Namespace Write Protected: No 00:10:44.311 Number of LBA Formats: 8 00:10:44.311 Current LBA Format: LBA Format #07 00:10:44.311 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.311 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.311 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.311 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.311 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.311 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.311 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.311 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.311 00:10:44.311 NVM Specific Namespace Data 00:10:44.311 =========================== 00:10:44.311 Logical Block Storage Tag Mask: 0 00:10:44.311 Protection Information Capabilities: 00:10:44.311 16b Guard Protection Information Storage Tag Support: No 00:10:44.311 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.311 Storage Tag Check Read Support: No 00:10:44.311 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.311 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.311 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.311 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.311 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.311 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.311 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.311 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.311 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:44.311 21:41:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:44.572 ===================================================== 00:10:44.572 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.572 ===================================================== 00:10:44.572 Controller Capabilities/Features 00:10:44.572 ================================ 00:10:44.572 Vendor ID: 1b36 00:10:44.572 Subsystem Vendor ID: 1af4 00:10:44.572 Serial Number: 12341 00:10:44.572 Model Number: QEMU NVMe Ctrl 00:10:44.572 Firmware Version: 8.0.0 00:10:44.572 Recommended Arb Burst: 6 00:10:44.572 IEEE OUI Identifier: 00 54 52 00:10:44.572 Multi-path I/O 00:10:44.572 May have multiple subsystem ports: No 00:10:44.572 May have multiple 
controllers: No 00:10:44.572 Associated with SR-IOV VF: No 00:10:44.572 Max Data Transfer Size: 524288 00:10:44.572 Max Number of Namespaces: 256 00:10:44.572 Max Number of I/O Queues: 64 00:10:44.572 NVMe Specification Version (VS): 1.4 00:10:44.572 NVMe Specification Version (Identify): 1.4 00:10:44.572 Maximum Queue Entries: 2048 00:10:44.572 Contiguous Queues Required: Yes 00:10:44.572 Arbitration Mechanisms Supported 00:10:44.572 Weighted Round Robin: Not Supported 00:10:44.572 Vendor Specific: Not Supported 00:10:44.572 Reset Timeout: 7500 ms 00:10:44.572 Doorbell Stride: 4 bytes 00:10:44.572 NVM Subsystem Reset: Not Supported 00:10:44.572 Command Sets Supported 00:10:44.572 NVM Command Set: Supported 00:10:44.572 Boot Partition: Not Supported 00:10:44.572 Memory Page Size Minimum: 4096 bytes 00:10:44.572 Memory Page Size Maximum: 65536 bytes 00:10:44.572 Persistent Memory Region: Not Supported 00:10:44.572 Optional Asynchronous Events Supported 00:10:44.572 Namespace Attribute Notices: Supported 00:10:44.572 Firmware Activation Notices: Not Supported 00:10:44.572 ANA Change Notices: Not Supported 00:10:44.572 PLE Aggregate Log Change Notices: Not Supported 00:10:44.572 LBA Status Info Alert Notices: Not Supported 00:10:44.572 EGE Aggregate Log Change Notices: Not Supported 00:10:44.572 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.572 Zone Descriptor Change Notices: Not Supported 00:10:44.572 Discovery Log Change Notices: Not Supported 00:10:44.572 Controller Attributes 00:10:44.572 128-bit Host Identifier: Not Supported 00:10:44.572 Non-Operational Permissive Mode: Not Supported 00:10:44.572 NVM Sets: Not Supported 00:10:44.572 Read Recovery Levels: Not Supported 00:10:44.572 Endurance Groups: Not Supported 00:10:44.572 Predictable Latency Mode: Not Supported 00:10:44.572 Traffic Based Keep Alive: Not Supported 00:10:44.572 Namespace Granularity: Not Supported 00:10:44.572 SQ Associations: Not Supported 00:10:44.572 UUID List: Not Supported 00:10:44.572 Multi-Domain Subsystem: Not Supported 00:10:44.572 Fixed Capacity Management: Not Supported 00:10:44.572 Variable Capacity Management: Not Supported 00:10:44.572 Delete Endurance Group: Not Supported 00:10:44.572 Delete NVM Set: Not Supported 00:10:44.572 Extended LBA Formats Supported: Supported 00:10:44.572 Flexible Data Placement Supported: Not Supported 00:10:44.572 00:10:44.572 Controller Memory Buffer Support 00:10:44.572 ================================ 00:10:44.572 Supported: No 00:10:44.572 00:10:44.572 Persistent Memory Region Support 00:10:44.572 ================================ 00:10:44.572 Supported: No 00:10:44.572 00:10:44.572 Admin Command Set Attributes 00:10:44.572 ============================ 00:10:44.572 Security Send/Receive: Not Supported 00:10:44.572 Format NVM: Supported 00:10:44.572 Firmware Activate/Download: Not Supported 00:10:44.572 Namespace Management: Supported 00:10:44.572 Device Self-Test: Not Supported 00:10:44.572 Directives: Supported 00:10:44.572 NVMe-MI: Not Supported 00:10:44.572 Virtualization Management: Not Supported 00:10:44.572 Doorbell Buffer Config: Supported 00:10:44.572 Get LBA Status Capability: Not Supported 00:10:44.572 Command & Feature Lockdown Capability: Not Supported 00:10:44.572 Abort Command Limit: 4 00:10:44.572 Async Event Request Limit: 4 00:10:44.572 Number of Firmware Slots: N/A 00:10:44.572 Firmware Slot 1 Read-Only: N/A 00:10:44.572 Firmware Activation Without Reset: N/A 00:10:44.572 Multiple Update Detection Support: N/A 00:10:44.572 Firmware Update
Granularity: No Information Provided 00:10:44.572 Per-Namespace SMART Log: Yes 00:10:44.572 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.572 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:44.572 Command Effects Log Page: Supported 00:10:44.572 Get Log Page Extended Data: Supported 00:10:44.572 Telemetry Log Pages: Not Supported 00:10:44.572 Persistent Event Log Pages: Not Supported 00:10:44.572 Supported Log Pages Log Page: May Support 00:10:44.572 Commands Supported & Effects Log Page: Not Supported 00:10:44.572 Feature Identifiers & Effects Log Page: May Support 00:10:44.572 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.572 Data Area 4 for Telemetry Log: Not Supported 00:10:44.572 Error Log Page Entries Supported: 1 00:10:44.572 Keep Alive: Not Supported 00:10:44.572 00:10:44.572 NVM Command Set Attributes 00:10:44.572 ========================== 00:10:44.572 Submission Queue Entry Size 00:10:44.572 Max: 64 00:10:44.572 Min: 64 00:10:44.572 Completion Queue Entry Size 00:10:44.572 Max: 16 00:10:44.572 Min: 16 00:10:44.572 Number of Namespaces: 256 00:10:44.572 Compare Command: Supported 00:10:44.572 Write Uncorrectable Command: Not Supported 00:10:44.572 Dataset Management Command: Supported 00:10:44.572 Write Zeroes Command: Supported 00:10:44.572 Set Features Save Field: Supported 00:10:44.572 Reservations: Not Supported 00:10:44.572 Timestamp: Supported 00:10:44.572 Copy: Supported 00:10:44.572 Volatile Write Cache: Present 00:10:44.572 Atomic Write Unit (Normal): 1 00:10:44.572 Atomic Write Unit (PFail): 1 00:10:44.572 Atomic Compare & Write Unit: 1 00:10:44.572 Fused Compare & Write: Not Supported 00:10:44.572 Scatter-Gather List 00:10:44.572 SGL Command Set: Supported 00:10:44.572 SGL Keyed: Not Supported 00:10:44.572 SGL Bit Bucket Descriptor: Not Supported 00:10:44.572 SGL Metadata Pointer: Not Supported 00:10:44.572 Oversized SGL: Not Supported 00:10:44.572 SGL Metadata Address: Not Supported 00:10:44.572 SGL Offset: Not Supported 00:10:44.572 Transport SGL Data Block: Not Supported 00:10:44.572 Replay Protected Memory Block: Not Supported 00:10:44.572 00:10:44.572 Firmware Slot Information 00:10:44.572 ========================= 00:10:44.572 Active slot: 1 00:10:44.572 Slot 1 Firmware Revision: 1.0 00:10:44.572 00:10:44.572 00:10:44.572 Commands Supported and Effects 00:10:44.572 ============================== 00:10:44.572 Admin Commands 00:10:44.572 -------------- 00:10:44.572 Delete I/O Submission Queue (00h): Supported 00:10:44.572 Create I/O Submission Queue (01h): Supported 00:10:44.572 Get Log Page (02h): Supported 00:10:44.572 Delete I/O Completion Queue (04h): Supported 00:10:44.572 Create I/O Completion Queue (05h): Supported 00:10:44.572 Identify (06h): Supported 00:10:44.572 Abort (08h): Supported 00:10:44.572 Set Features (09h): Supported 00:10:44.573 Get Features (0Ah): Supported 00:10:44.573 Asynchronous Event Request (0Ch): Supported 00:10:44.573 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.573 Directive Send (19h): Supported 00:10:44.573 Directive Receive (1Ah): Supported 00:10:44.573 Virtualization Management (1Ch): Supported 00:10:44.573 Doorbell Buffer Config (7Ch): Supported 00:10:44.573 Format NVM (80h): Supported LBA-Change 00:10:44.573 I/O Commands 00:10:44.573 ------------ 00:10:44.573 Flush (00h): Supported LBA-Change 00:10:44.573 Write (01h): Supported LBA-Change 00:10:44.573 Read (02h): Supported 00:10:44.573 Compare (05h): Supported 00:10:44.573 Write Zeroes (08h): Supported LBA-Change 00:10:44.573
Dataset Management (09h): Supported LBA-Change 00:10:44.573 Unknown (0Ch): Supported 00:10:44.573 Unknown (12h): Supported 00:10:44.573 Copy (19h): Supported LBA-Change 00:10:44.573 Unknown (1Dh): Supported LBA-Change 00:10:44.573 00:10:44.573 Error Log 00:10:44.573 ========= 00:10:44.573 00:10:44.573 Arbitration 00:10:44.573 =========== 00:10:44.573 Arbitration Burst: no limit 00:10:44.573 00:10:44.573 Power Management 00:10:44.573 ================ 00:10:44.573 Number of Power States: 1 00:10:44.573 Current Power State: Power State #0 00:10:44.573 Power State #0: 00:10:44.573 Max Power: 25.00 W 00:10:44.573 Non-Operational State: Operational 00:10:44.573 Entry Latency: 16 microseconds 00:10:44.573 Exit Latency: 4 microseconds 00:10:44.573 Relative Read Throughput: 0 00:10:44.573 Relative Read Latency: 0 00:10:44.573 Relative Write Throughput: 0 00:10:44.573 Relative Write Latency: 0 00:10:44.573 Idle Power: Not Reported 00:10:44.573 Active Power: Not Reported 00:10:44.573 Non-Operational Permissive Mode: Not Supported 00:10:44.573 00:10:44.573 Health Information 00:10:44.573 ================== 00:10:44.573 Critical Warnings: 00:10:44.573 Available Spare Space: OK 00:10:44.573 Temperature: OK 00:10:44.573 Device Reliability: OK 00:10:44.573 Read Only: No 00:10:44.573 Volatile Memory Backup: OK 00:10:44.573 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.573 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.573 Available Spare: 0% 00:10:44.573 Available Spare Threshold: 0% 00:10:44.573 Life Percentage Used: 0% 00:10:44.573 Data Units Read: 1142 00:10:44.573 Data Units Written: 1008 00:10:44.573 Host Read Commands: 58126 00:10:44.573 Host Write Commands: 56907 00:10:44.573 Controller Busy Time: 0 minutes 00:10:44.573 Power Cycles: 0 00:10:44.573 Power On Hours: 0 hours 00:10:44.573 Unsafe Shutdowns: 0 00:10:44.573 Unrecoverable Media Errors: 0 00:10:44.573 Lifetime Error Log Entries: 0 00:10:44.573 Warning Temperature Time: 0 minutes 00:10:44.573 Critical Temperature Time: 0 minutes 00:10:44.573 00:10:44.573 Number of Queues 00:10:44.573 ================ 00:10:44.573 Number of I/O Submission Queues: 64 00:10:44.573 Number of I/O Completion Queues: 64 00:10:44.573 00:10:44.573 ZNS Specific Controller Data 00:10:44.573 ============================ 00:10:44.573 Zone Append Size Limit: 0 00:10:44.573 00:10:44.573 00:10:44.573 Active Namespaces 00:10:44.573 ================= 00:10:44.573 Namespace ID:1 00:10:44.573 Error Recovery Timeout: Unlimited 00:10:44.573 Command Set Identifier: NVM (00h) 00:10:44.573 Deallocate: Supported 00:10:44.573 Deallocated/Unwritten Error: Supported 00:10:44.573 Deallocated Read Value: All 0x00 00:10:44.573 Deallocate in Write Zeroes: Not Supported 00:10:44.573 Deallocated Guard Field: 0xFFFF 00:10:44.573 Flush: Supported 00:10:44.573 Reservation: Not Supported 00:10:44.573 Namespace Sharing Capabilities: Private 00:10:44.573 Size (in LBAs): 1310720 (5GiB) 00:10:44.573 Capacity (in LBAs): 1310720 (5GiB) 00:10:44.573 Utilization (in LBAs): 1310720 (5GiB) 00:10:44.573 Thin Provisioning: Not Supported 00:10:44.573 Per-NS Atomic Units: No 00:10:44.573 Maximum Single Source Range Length: 128 00:10:44.573 Maximum Copy Length: 128 00:10:44.573 Maximum Source Range Count: 128 00:10:44.573 NGUID/EUI64 Never Reused: No 00:10:44.573 Namespace Write Protected: No 00:10:44.573 Number of LBA Formats: 8 00:10:44.573 Current LBA Format: LBA Format #04 00:10:44.573 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.573 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:10:44.573 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.573 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.573 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.573 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.573 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.573 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.573 00:10:44.573 NVM Specific Namespace Data 00:10:44.573 =========================== 00:10:44.573 Logical Block Storage Tag Mask: 0 00:10:44.573 Protection Information Capabilities: 00:10:44.573 16b Guard Protection Information Storage Tag Support: No 00:10:44.573 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.573 Storage Tag Check Read Support: No 00:10:44.573 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.573 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.573 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.573 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.573 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.573 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.573 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.573 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.573 21:41:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:44.573 21:41:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:44.833 ===================================================== 00:10:44.833 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.833 ===================================================== 00:10:44.833 Controller Capabilities/Features 00:10:44.833 ================================ 00:10:44.833 Vendor ID: 1b36 00:10:44.833 Subsystem Vendor ID: 1af4 00:10:44.833 Serial Number: 12342 00:10:44.833 Model Number: QEMU NVMe Ctrl 00:10:44.833 Firmware Version: 8.0.0 00:10:44.833 Recommended Arb Burst: 6 00:10:44.833 IEEE OUI Identifier: 00 54 52 00:10:44.833 Multi-path I/O 00:10:44.833 May have multiple subsystem ports: No 00:10:44.833 May have multiple controllers: No 00:10:44.833 Associated with SR-IOV VF: No 00:10:44.833 Max Data Transfer Size: 524288 00:10:44.834 Max Number of Namespaces: 256 00:10:44.834 Max Number of I/O Queues: 64 00:10:44.834 NVMe Specification Version (VS): 1.4 00:10:44.834 NVMe Specification Version (Identify): 1.4 00:10:44.834 Maximum Queue Entries: 2048 00:10:44.834 Contiguous Queues Required: Yes 00:10:44.834 Arbitration Mechanisms Supported 00:10:44.834 Weighted Round Robin: Not Supported 00:10:44.834 Vendor Specific: Not Supported 00:10:44.834 Reset Timeout: 7500 ms 00:10:44.834 Doorbell Stride: 4 bytes 00:10:44.834 NVM Subsystem Reset: Not Supported 00:10:44.834 Command Sets Supported 00:10:44.834 NVM Command Set: Supported 00:10:44.834 Boot Partition: Not Supported 00:10:44.834 Memory Page Size Minimum: 4096 bytes 00:10:44.834 Memory Page Size Maximum: 65536 bytes 00:10:44.834 Persistent Memory Region: Not Supported 00:10:44.834 Optional Asynchronous Events Supported 00:10:44.834 Namespace Attribute Notices: Supported 00:10:44.834 
Firmware Activation Notices: Not Supported 00:10:44.834 ANA Change Notices: Not Supported 00:10:44.834 PLE Aggregate Log Change Notices: Not Supported 00:10:44.834 LBA Status Info Alert Notices: Not Supported 00:10:44.834 EGE Aggregate Log Change Notices: Not Supported 00:10:44.834 Normal NVM Subsystem Shutdown event: Not Supported 00:10:44.834 Zone Descriptor Change Notices: Not Supported 00:10:44.834 Discovery Log Change Notices: Not Supported 00:10:44.834 Controller Attributes 00:10:44.834 128-bit Host Identifier: Not Supported 00:10:44.834 Non-Operational Permissive Mode: Not Supported 00:10:44.834 NVM Sets: Not Supported 00:10:44.834 Read Recovery Levels: Not Supported 00:10:44.834 Endurance Groups: Not Supported 00:10:44.834 Predictable Latency Mode: Not Supported 00:10:44.834 Traffic Based Keep Alive: Not Supported 00:10:44.834 Namespace Granularity: Not Supported 00:10:44.834 SQ Associations: Not Supported 00:10:44.834 UUID List: Not Supported 00:10:44.834 Multi-Domain Subsystem: Not Supported 00:10:44.834 Fixed Capacity Management: Not Supported 00:10:44.834 Variable Capacity Management: Not Supported 00:10:44.834 Delete Endurance Group: Not Supported 00:10:44.834 Delete NVM Set: Not Supported 00:10:44.834 Extended LBA Formats Supported: Supported 00:10:44.834 Flexible Data Placement Supported: Not Supported 00:10:44.834 00:10:44.834 Controller Memory Buffer Support 00:10:44.834 ================================ 00:10:44.834 Supported: No 00:10:44.834 00:10:44.834 Persistent Memory Region Support 00:10:44.834 ================================ 00:10:44.834 Supported: No 00:10:44.834 00:10:44.834 Admin Command Set Attributes 00:10:44.834 ============================ 00:10:44.834 Security Send/Receive: Not Supported 00:10:44.834 Format NVM: Supported 00:10:44.834 Firmware Activate/Download: Not Supported 00:10:44.834 Namespace Management: Supported 00:10:44.834 Device Self-Test: Not Supported 00:10:44.834 Directives: Supported 00:10:44.834 NVMe-MI: Not Supported 00:10:44.834 Virtualization Management: Not Supported 00:10:44.834 Doorbell Buffer Config: Supported 00:10:44.834 Get LBA Status Capability: Not Supported 00:10:44.834 Command & Feature Lockdown Capability: Not Supported 00:10:44.834 Abort Command Limit: 4 00:10:44.834 Async Event Request Limit: 4 00:10:44.834 Number of Firmware Slots: N/A 00:10:44.834 Firmware Slot 1 Read-Only: N/A 00:10:44.834 Firmware Activation Without Reset: N/A 00:10:44.834 Multiple Update Detection Support: N/A 00:10:44.834 Firmware Update Granularity: No Information Provided 00:10:44.834 Per-Namespace SMART Log: Yes 00:10:44.834 Asymmetric Namespace Access Log Page: Not Supported 00:10:44.834 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:44.834 Command Effects Log Page: Supported 00:10:44.834 Get Log Page Extended Data: Supported 00:10:44.834 Telemetry Log Pages: Not Supported 00:10:44.834 Persistent Event Log Pages: Not Supported 00:10:44.834 Supported Log Pages Log Page: May Support 00:10:44.834 Commands Supported & Effects Log Page: Not Supported 00:10:44.834 Feature Identifiers & Effects Log Page: May Support 00:10:44.834 NVMe-MI Commands & Effects Log Page: May Support 00:10:44.834 Data Area 4 for Telemetry Log: Not Supported 00:10:44.834 Error Log Page Entries Supported: 1 00:10:44.834 Keep Alive: Not Supported 00:10:44.834 00:10:44.834 NVM Command Set Attributes 00:10:44.834 ========================== 00:10:44.834 Submission Queue Entry Size 00:10:44.834 Max: 64 00:10:44.834 Min: 64 00:10:44.834 Completion Queue Entry Size 00:10:44.834 Max: 16
00:10:44.834 Min: 16 00:10:44.834 Number of Namespaces: 256 00:10:44.834 Compare Command: Supported 00:10:44.834 Write Uncorrectable Command: Not Supported 00:10:44.834 Dataset Management Command: Supported 00:10:44.834 Write Zeroes Command: Supported 00:10:44.834 Set Features Save Field: Supported 00:10:44.834 Reservations: Not Supported 00:10:44.834 Timestamp: Supported 00:10:44.834 Copy: Supported 00:10:44.834 Volatile Write Cache: Present 00:10:44.834 Atomic Write Unit (Normal): 1 00:10:44.834 Atomic Write Unit (PFail): 1 00:10:44.834 Atomic Compare & Write Unit: 1 00:10:44.834 Fused Compare & Write: Not Supported 00:10:44.834 Scatter-Gather List 00:10:44.834 SGL Command Set: Supported 00:10:44.834 SGL Keyed: Not Supported 00:10:44.834 SGL Bit Bucket Descriptor: Not Supported 00:10:44.834 SGL Metadata Pointer: Not Supported 00:10:44.834 Oversized SGL: Not Supported 00:10:44.834 SGL Metadata Address: Not Supported 00:10:44.834 SGL Offset: Not Supported 00:10:44.834 Transport SGL Data Block: Not Supported 00:10:44.834 Replay Protected Memory Block: Not Supported 00:10:44.834 00:10:44.834 Firmware Slot Information 00:10:44.834 ========================= 00:10:44.834 Active slot: 1 00:10:44.834 Slot 1 Firmware Revision: 1.0 00:10:44.834 00:10:44.834 00:10:44.834 Commands Supported and Effects 00:10:44.834 ============================== 00:10:44.834 Admin Commands 00:10:44.834 -------------- 00:10:44.834 Delete I/O Submission Queue (00h): Supported 00:10:44.834 Create I/O Submission Queue (01h): Supported 00:10:44.834 Get Log Page (02h): Supported 00:10:44.834 Delete I/O Completion Queue (04h): Supported 00:10:44.834 Create I/O Completion Queue (05h): Supported 00:10:44.834 Identify (06h): Supported 00:10:44.834 Abort (08h): Supported 00:10:44.834 Set Features (09h): Supported 00:10:44.834 Get Features (0Ah): Supported 00:10:44.834 Asynchronous Event Request (0Ch): Supported 00:10:44.834 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:44.834 Directive Send (19h): Supported 00:10:44.834 Directive Receive (1Ah): Supported 00:10:44.834 Virtualization Management (1Ch): Supported 00:10:44.834 Doorbell Buffer Config (7Ch): Supported 00:10:44.834 Format NVM (80h): Supported LBA-Change 00:10:44.834 I/O Commands 00:10:44.834 ------------ 00:10:44.835 Flush (00h): Supported LBA-Change 00:10:44.835 Write (01h): Supported LBA-Change 00:10:44.835 Read (02h): Supported 00:10:44.835 Compare (05h): Supported 00:10:44.835 Write Zeroes (08h): Supported LBA-Change 00:10:44.835 Dataset Management (09h): Supported LBA-Change 00:10:44.835 Unknown (0Ch): Supported 00:10:44.835 Unknown (12h): Supported 00:10:44.835 Copy (19h): Supported LBA-Change 00:10:44.835 Unknown (1Dh): Supported LBA-Change 00:10:44.835 00:10:44.835 Error Log 00:10:44.835 ========= 00:10:44.835 00:10:44.835 Arbitration 00:10:44.835 =========== 00:10:44.835 Arbitration Burst: no limit 00:10:44.835 00:10:44.835 Power Management 00:10:44.835 ================ 00:10:44.835 Number of Power States: 1 00:10:44.835 Current Power State: Power State #0 00:10:44.835 Power State #0: 00:10:44.835 Max Power: 25.00 W 00:10:44.835 Non-Operational State: Operational 00:10:44.835 Entry Latency: 16 microseconds 00:10:44.835 Exit Latency: 4 microseconds 00:10:44.835 Relative Read Throughput: 0 00:10:44.835 Relative Read Latency: 0 00:10:44.835 Relative Write Throughput: 0 00:10:44.835 Relative Write Latency: 0 00:10:44.835 Idle Power: Not Reported 00:10:44.835 Active Power: Not Reported 00:10:44.835 Non-Operational Permissive Mode: Not Supported 
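The identify dumps in this section are emitted by the nvme.sh loop visible in the traced commands above, which runs spdk_nvme_identify once per PCIe address under test. A minimal standalone sketch of that invocation pattern follows, in bash; the hard-coded bdfs list is an assumption for illustration (the harness discovers the addresses itself), while the binary path and the -r/-i arguments are taken verbatim from the logged command lines:

#!/usr/bin/env bash
# Sketch only: dump identify data for each NVMe controller under test.
# The bdfs array is assumed here; in the real harness it is discovered.
bdfs=("0000:00:10.0" "0000:00:11.0" "0000:00:12.0" "0000:00:13.0")
for bdf in "${bdfs[@]}"; do
  # -r selects the transport ID (PCIe plus bus/device/function address) and
  # -i the shared-memory group ID, exactly as in the logged invocations.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r "trtype:PCIe traddr:${bdf}" -i 0
done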
00:10:44.835 00:10:44.835 Health Information 00:10:44.835 ================== 00:10:44.835 Critical Warnings: 00:10:44.835 Available Spare Space: OK 00:10:44.835 Temperature: OK 00:10:44.835 Device Reliability: OK 00:10:44.835 Read Only: No 00:10:44.835 Volatile Memory Backup: OK 00:10:44.835 Current Temperature: 323 Kelvin (50 Celsius) 00:10:44.835 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:44.835 Available Spare: 0% 00:10:44.835 Available Spare Threshold: 0% 00:10:44.835 Life Percentage Used: 0% 00:10:44.835 Data Units Read: 2530 00:10:44.835 Data Units Written: 2317 00:10:44.835 Host Read Commands: 121043 00:10:44.835 Host Write Commands: 119312 00:10:44.835 Controller Busy Time: 0 minutes 00:10:44.835 Power Cycles: 0 00:10:44.835 Power On Hours: 0 hours 00:10:44.835 Unsafe Shutdowns: 0 00:10:44.835 Unrecoverable Media Errors: 0 00:10:44.835 Lifetime Error Log Entries: 0 00:10:44.835 Warning Temperature Time: 0 minutes 00:10:44.835 Critical Temperature Time: 0 minutes 00:10:44.835 00:10:44.835 Number of Queues 00:10:44.835 ================ 00:10:44.835 Number of I/O Submission Queues: 64 00:10:44.835 Number of I/O Completion Queues: 64 00:10:44.835 00:10:44.835 ZNS Specific Controller Data 00:10:44.835 ============================ 00:10:44.835 Zone Append Size Limit: 0 00:10:44.835 00:10:44.835 00:10:44.835 Active Namespaces 00:10:44.835 ================= 00:10:44.835 Namespace ID:1 00:10:44.835 Error Recovery Timeout: Unlimited 00:10:44.835 Command Set Identifier: NVM (00h) 00:10:44.835 Deallocate: Supported 00:10:44.835 Deallocated/Unwritten Error: Supported 00:10:44.835 Deallocated Read Value: All 0x00 00:10:44.835 Deallocate in Write Zeroes: Not Supported 00:10:44.835 Deallocated Guard Field: 0xFFFF 00:10:44.835 Flush: Supported 00:10:44.835 Reservation: Not Supported 00:10:44.835 Namespace Sharing Capabilities: Private 00:10:44.835 Size (in LBAs): 1048576 (4GiB) 00:10:44.835 Capacity (in LBAs): 1048576 (4GiB) 00:10:44.835 Utilization (in LBAs): 1048576 (4GiB) 00:10:44.835 Thin Provisioning: Not Supported 00:10:44.835 Per-NS Atomic Units: No 00:10:44.835 Maximum Single Source Range Length: 128 00:10:44.835 Maximum Copy Length: 128 00:10:44.835 Maximum Source Range Count: 128 00:10:44.835 NGUID/EUI64 Never Reused: No 00:10:44.835 Namespace Write Protected: No 00:10:44.835 Number of LBA Formats: 8 00:10:44.835 Current LBA Format: LBA Format #04 00:10:44.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.835 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.835 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.835 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.835 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.835 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.835 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.835 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.835 00:10:44.835 NVM Specific Namespace Data 00:10:44.835 =========================== 00:10:44.835 Logical Block Storage Tag Mask: 0 00:10:44.835 Protection Information Capabilities: 00:10:44.835 16b Guard Protection Information Storage Tag Support: No 00:10:44.835 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.835 Storage Tag Check Read Support: No 00:10:44.835 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Namespace ID:2 00:10:44.835 Error Recovery Timeout: Unlimited 00:10:44.835 Command Set Identifier: NVM (00h) 00:10:44.835 Deallocate: Supported 00:10:44.835 Deallocated/Unwritten Error: Supported 00:10:44.835 Deallocated Read Value: All 0x00 00:10:44.835 Deallocate in Write Zeroes: Not Supported 00:10:44.835 Deallocated Guard Field: 0xFFFF 00:10:44.835 Flush: Supported 00:10:44.835 Reservation: Not Supported 00:10:44.835 Namespace Sharing Capabilities: Private 00:10:44.835 Size (in LBAs): 1048576 (4GiB) 00:10:44.835 Capacity (in LBAs): 1048576 (4GiB) 00:10:44.835 Utilization (in LBAs): 1048576 (4GiB) 00:10:44.835 Thin Provisioning: Not Supported 00:10:44.835 Per-NS Atomic Units: No 00:10:44.835 Maximum Single Source Range Length: 128 00:10:44.835 Maximum Copy Length: 128 00:10:44.835 Maximum Source Range Count: 128 00:10:44.835 NGUID/EUI64 Never Reused: No 00:10:44.835 Namespace Write Protected: No 00:10:44.835 Number of LBA Formats: 8 00:10:44.835 Current LBA Format: LBA Format #04 00:10:44.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.835 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.835 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.835 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.835 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.835 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.835 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.835 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.835 00:10:44.835 NVM Specific Namespace Data 00:10:44.835 =========================== 00:10:44.835 Logical Block Storage Tag Mask: 0 00:10:44.835 Protection Information Capabilities: 00:10:44.835 16b Guard Protection Information Storage Tag Support: No 00:10:44.835 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.835 Storage Tag Check Read Support: No 00:10:44.835 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.835 Namespace ID:3 00:10:44.835 Error Recovery Timeout: Unlimited 00:10:44.835 Command Set Identifier: NVM (00h) 00:10:44.835 Deallocate: Supported 00:10:44.835 Deallocated/Unwritten Error: Supported 00:10:44.835 Deallocated Read 
Value: All 0x00 00:10:44.835 Deallocate in Write Zeroes: Not Supported 00:10:44.835 Deallocated Guard Field: 0xFFFF 00:10:44.835 Flush: Supported 00:10:44.835 Reservation: Not Supported 00:10:44.835 Namespace Sharing Capabilities: Private 00:10:44.835 Size (in LBAs): 1048576 (4GiB) 00:10:44.835 Capacity (in LBAs): 1048576 (4GiB) 00:10:44.835 Utilization (in LBAs): 1048576 (4GiB) 00:10:44.835 Thin Provisioning: Not Supported 00:10:44.835 Per-NS Atomic Units: No 00:10:44.835 Maximum Single Source Range Length: 128 00:10:44.835 Maximum Copy Length: 128 00:10:44.835 Maximum Source Range Count: 128 00:10:44.835 NGUID/EUI64 Never Reused: No 00:10:44.835 Namespace Write Protected: No 00:10:44.835 Number of LBA Formats: 8 00:10:44.835 Current LBA Format: LBA Format #04 00:10:44.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:44.835 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:44.835 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:44.835 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:44.835 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:44.835 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:44.836 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:44.836 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:44.836 00:10:44.836 NVM Specific Namespace Data 00:10:44.836 =========================== 00:10:44.836 Logical Block Storage Tag Mask: 0 00:10:44.836 Protection Information Capabilities: 00:10:44.836 16b Guard Protection Information Storage Tag Support: No 00:10:44.836 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:44.836 Storage Tag Check Read Support: No 00:10:44.836 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.836 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.836 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.836 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.836 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.836 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.836 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.836 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:44.836 21:41:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:44.836 21:41:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:45.094 ===================================================== 00:10:45.094 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:45.094 ===================================================== 00:10:45.094 Controller Capabilities/Features 00:10:45.094 ================================ 00:10:45.094 Vendor ID: 1b36 00:10:45.094 Subsystem Vendor ID: 1af4 00:10:45.094 Serial Number: 12343 00:10:45.094 Model Number: QEMU NVMe Ctrl 00:10:45.094 Firmware Version: 8.0.0 00:10:45.094 Recommended Arb Burst: 6 00:10:45.094 IEEE OUI Identifier: 00 54 52 00:10:45.094 Multi-path I/O 00:10:45.094 May have multiple subsystem ports: No 00:10:45.094 May have multiple controllers: Yes 00:10:45.094 Associated with SR-IOV VF: No 00:10:45.094 Max Data Transfer Size: 524288 00:10:45.094 Max Number of Namespaces: 
256 00:10:45.094 Max Number of I/O Queues: 64 00:10:45.094 NVMe Specification Version (VS): 1.4 00:10:45.094 NVMe Specification Version (Identify): 1.4 00:10:45.094 Maximum Queue Entries: 2048 00:10:45.094 Contiguous Queues Required: Yes 00:10:45.094 Arbitration Mechanisms Supported 00:10:45.094 Weighted Round Robin: Not Supported 00:10:45.094 Vendor Specific: Not Supported 00:10:45.094 Reset Timeout: 7500 ms 00:10:45.094 Doorbell Stride: 4 bytes 00:10:45.094 NVM Subsystem Reset: Not Supported 00:10:45.094 Command Sets Supported 00:10:45.094 NVM Command Set: Supported 00:10:45.094 Boot Partition: Not Supported 00:10:45.094 Memory Page Size Minimum: 4096 bytes 00:10:45.094 Memory Page Size Maximum: 65536 bytes 00:10:45.094 Persistent Memory Region: Not Supported 00:10:45.094 Optional Asynchronous Events Supported 00:10:45.094 Namespace Attribute Notices: Supported 00:10:45.094 Firmware Activation Notices: Not Supported 00:10:45.094 ANA Change Notices: Not Supported 00:10:45.094 PLE Aggregate Log Change Notices: Not Supported 00:10:45.094 LBA Status Info Alert Notices: Not Supported 00:10:45.094 EGE Aggregate Log Change Notices: Not Supported 00:10:45.094 Normal NVM Subsystem Shutdown event: Not Supported 00:10:45.094 Zone Descriptor Change Notices: Not Supported 00:10:45.094 Discovery Log Change Notices: Not Supported 00:10:45.094 Controller Attributes 00:10:45.094 128-bit Host Identifier: Not Supported 00:10:45.094 Non-Operational Permissive Mode: Not Supported 00:10:45.094 NVM Sets: Not Supported 00:10:45.094 Read Recovery Levels: Not Supported 00:10:45.094 Endurance Groups: Supported 00:10:45.094 Predictable Latency Mode: Not Supported 00:10:45.094 Traffic Based Keep Alive: Not Supported 00:10:45.094 Namespace Granularity: Not Supported 00:10:45.094 SQ Associations: Not Supported 00:10:45.094 UUID List: Not Supported 00:10:45.094 Multi-Domain Subsystem: Not Supported 00:10:45.094 Fixed Capacity Management: Not Supported 00:10:45.094 Variable Capacity Management: Not Supported 00:10:45.094 Delete Endurance Group: Not Supported 00:10:45.094 Delete NVM Set: Not Supported 00:10:45.094 Extended LBA Formats Supported: Supported 00:10:45.094 Flexible Data Placement Supported: Supported 00:10:45.094 00:10:45.094 Controller Memory Buffer Support 00:10:45.094 ================================ 00:10:45.094 Supported: No 00:10:45.094 00:10:45.094 Persistent Memory Region Support 00:10:45.094 ================================ 00:10:45.094 Supported: No 00:10:45.094 00:10:45.094 Admin Command Set Attributes 00:10:45.094 ============================ 00:10:45.094 Security Send/Receive: Not Supported 00:10:45.094 Format NVM: Supported 00:10:45.094 Firmware Activate/Download: Not Supported 00:10:45.094 Namespace Management: Supported 00:10:45.094 Device Self-Test: Not Supported 00:10:45.094 Directives: Supported 00:10:45.094 NVMe-MI: Not Supported 00:10:45.094 Virtualization Management: Not Supported 00:10:45.094 Doorbell Buffer Config: Supported 00:10:45.094 Get LBA Status Capability: Not Supported 00:10:45.095 Command & Feature Lockdown Capability: Not Supported 00:10:45.095 Abort Command Limit: 4 00:10:45.095 Async Event Request Limit: 4 00:10:45.095 Number of Firmware Slots: N/A 00:10:45.095 Firmware Slot 1 Read-Only: N/A 00:10:45.095 Firmware Activation Without Reset: N/A 00:10:45.095 Multiple Update Detection Support: N/A 00:10:45.095 Firmware Update Granularity: No Information Provided 00:10:45.095 Per-Namespace SMART Log: Yes 00:10:45.095 Asymmetric Namespace Access Log Page: Not Supported
00:10:45.095 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:45.095 Command Effects Log Page: Supported 00:10:45.095 Get Log Page Extended Data: Supported 00:10:45.095 Telemetry Log Pages: Not Supported 00:10:45.095 Persistent Event Log Pages: Not Supported 00:10:45.095 Supported Log Pages Log Page: May Support 00:10:45.095 Commands Supported & Effects Log Page: Not Supported 00:10:45.095 Feature Identifiers & Effects Log Page: May Support 00:10:45.095 NVMe-MI Commands & Effects Log Page: May Support 00:10:45.095 Data Area 4 for Telemetry Log: Not Supported 00:10:45.095 Error Log Page Entries Supported: 1 00:10:45.095 Keep Alive: Not Supported 00:10:45.095 00:10:45.095 NVM Command Set Attributes 00:10:45.095 ========================== 00:10:45.095 Submission Queue Entry Size 00:10:45.095 Max: 64 00:10:45.095 Min: 64 00:10:45.095 Completion Queue Entry Size 00:10:45.095 Max: 16 00:10:45.095 Min: 16 00:10:45.095 Number of Namespaces: 256 00:10:45.095 Compare Command: Supported 00:10:45.095 Write Uncorrectable Command: Not Supported 00:10:45.095 Dataset Management Command: Supported 00:10:45.095 Write Zeroes Command: Supported 00:10:45.095 Set Features Save Field: Supported 00:10:45.095 Reservations: Not Supported 00:10:45.095 Timestamp: Supported 00:10:45.095 Copy: Supported 00:10:45.095 Volatile Write Cache: Present 00:10:45.095 Atomic Write Unit (Normal): 1 00:10:45.095 Atomic Write Unit (PFail): 1 00:10:45.095 Atomic Compare & Write Unit: 1 00:10:45.095 Fused Compare & Write: Not Supported 00:10:45.095 Scatter-Gather List 00:10:45.095 SGL Command Set: Supported 00:10:45.095 SGL Keyed: Not Supported 00:10:45.095 SGL Bit Bucket Descriptor: Not Supported 00:10:45.095 SGL Metadata Pointer: Not Supported 00:10:45.095 Oversized SGL: Not Supported 00:10:45.095 SGL Metadata Address: Not Supported 00:10:45.095 SGL Offset: Not Supported 00:10:45.095 Transport SGL Data Block: Not Supported 00:10:45.095 Replay Protected Memory Block: Not Supported 00:10:45.095 00:10:45.095 Firmware Slot Information 00:10:45.095 ========================= 00:10:45.095 Active slot: 1 00:10:45.095 Slot 1 Firmware Revision: 1.0 00:10:45.095 00:10:45.095 00:10:45.095 Commands Supported and Effects 00:10:45.095 ============================== 00:10:45.095 Admin Commands 00:10:45.095 -------------- 00:10:45.095 Delete I/O Submission Queue (00h): Supported 00:10:45.095 Create I/O Submission Queue (01h): Supported 00:10:45.095 Get Log Page (02h): Supported 00:10:45.095 Delete I/O Completion Queue (04h): Supported 00:10:45.095 Create I/O Completion Queue (05h): Supported 00:10:45.095 Identify (06h): Supported 00:10:45.095 Abort (08h): Supported 00:10:45.095 Set Features (09h): Supported 00:10:45.095 Get Features (0Ah): Supported 00:10:45.095 Asynchronous Event Request (0Ch): Supported 00:10:45.095 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:45.095 Directive Send (19h): Supported 00:10:45.095 Directive Receive (1Ah): Supported 00:10:45.095 Virtualization Management (1Ch): Supported 00:10:45.095 Doorbell Buffer Config (7Ch): Supported 00:10:45.095 Format NVM (80h): Supported LBA-Change 00:10:45.095 I/O Commands 00:10:45.095 ------------ 00:10:45.095 Flush (00h): Supported LBA-Change 00:10:45.095 Write (01h): Supported LBA-Change 00:10:45.095 Read (02h): Supported 00:10:45.095 Compare (05h): Supported 00:10:45.095 Write Zeroes (08h): Supported LBA-Change 00:10:45.095 Dataset Management (09h): Supported LBA-Change 00:10:45.095 Unknown (0Ch): Supported 00:10:45.095 Unknown (12h): Supported 00:10:45.095 Copy
(19h): Supported LBA-Change 00:10:45.095 Unknown (1Dh): Supported LBA-Change 00:10:45.095 00:10:45.095 Error Log 00:10:45.095 ========= 00:10:45.095 00:10:45.095 Arbitration 00:10:45.095 =========== 00:10:45.095 Arbitration Burst: no limit 00:10:45.095 00:10:45.095 Power Management 00:10:45.095 ================ 00:10:45.095 Number of Power States: 1 00:10:45.095 Current Power State: Power State #0 00:10:45.095 Power State #0: 00:10:45.095 Max Power: 25.00 W 00:10:45.095 Non-Operational State: Operational 00:10:45.095 Entry Latency: 16 microseconds 00:10:45.095 Exit Latency: 4 microseconds 00:10:45.095 Relative Read Throughput: 0 00:10:45.095 Relative Read Latency: 0 00:10:45.095 Relative Write Throughput: 0 00:10:45.095 Relative Write Latency: 0 00:10:45.095 Idle Power: Not Reported 00:10:45.095 Active Power: Not Reported 00:10:45.095 Non-Operational Permissive Mode: Not Supported 00:10:45.095 00:10:45.095 Health Information 00:10:45.095 ================== 00:10:45.095 Critical Warnings: 00:10:45.095 Available Spare Space: OK 00:10:45.095 Temperature: OK 00:10:45.095 Device Reliability: OK 00:10:45.095 Read Only: No 00:10:45.095 Volatile Memory Backup: OK 00:10:45.095 Current Temperature: 323 Kelvin (50 Celsius) 00:10:45.095 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:45.095 Available Spare: 0% 00:10:45.095 Available Spare Threshold: 0% 00:10:45.095 Life Percentage Used: 0% 00:10:45.095 Data Units Read: 927 00:10:45.095 Data Units Written: 856 00:10:45.095 Host Read Commands: 41097 00:10:45.095 Host Write Commands: 40520 00:10:45.095 Controller Busy Time: 0 minutes 00:10:45.095 Power Cycles: 0 00:10:45.095 Power On Hours: 0 hours 00:10:45.095 Unsafe Shutdowns: 0 00:10:45.095 Unrecoverable Media Errors: 0 00:10:45.095 Lifetime Error Log Entries: 0 00:10:45.095 Warning Temperature Time: 0 minutes 00:10:45.095 Critical Temperature Time: 0 minutes 00:10:45.095 00:10:45.095 Number of Queues 00:10:45.095 ================ 00:10:45.095 Number of I/O Submission Queues: 64 00:10:45.095 Number of I/O Completion Queues: 64 00:10:45.095 00:10:45.095 ZNS Specific Controller Data 00:10:45.095 ============================ 00:10:45.095 Zone Append Size Limit: 0 00:10:45.095 00:10:45.095 00:10:45.095 Active Namespaces 00:10:45.095 ================= 00:10:45.095 Namespace ID:1 00:10:45.095 Error Recovery Timeout: Unlimited 00:10:45.095 Command Set Identifier: NVM (00h) 00:10:45.095 Deallocate: Supported 00:10:45.095 Deallocated/Unwritten Error: Supported 00:10:45.095 Deallocated Read Value: All 0x00 00:10:45.095 Deallocate in Write Zeroes: Not Supported 00:10:45.095 Deallocated Guard Field: 0xFFFF 00:10:45.095 Flush: Supported 00:10:45.095 Reservation: Not Supported 00:10:45.095 Namespace Sharing Capabilities: Multiple Controllers 00:10:45.095 Size (in LBAs): 262144 (1GiB) 00:10:45.095 Capacity (in LBAs): 262144 (1GiB) 00:10:45.095 Utilization (in LBAs): 262144 (1GiB) 00:10:45.095 Thin Provisioning: Not Supported 00:10:45.095 Per-NS Atomic Units: No 00:10:45.095 Maximum Single Source Range Length: 128 00:10:45.095 Maximum Copy Length: 128 00:10:45.095 Maximum Source Range Count: 128 00:10:45.095 NGUID/EUI64 Never Reused: No 00:10:45.095 Namespace Write Protected: No 00:10:45.095 Endurance group ID: 1 00:10:45.095 Number of LBA Formats: 8 00:10:45.095 Current LBA Format: LBA Format #04 00:10:45.095 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:45.095 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:45.095 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:45.095 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:10:45.095 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:45.095 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:45.095 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:45.095 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:45.095 00:10:45.095 Get Feature FDP: 00:10:45.095 ================ 00:10:45.095 Enabled: Yes 00:10:45.095 FDP configuration index: 0 00:10:45.095 00:10:45.095 FDP configurations log page 00:10:45.095 =========================== 00:10:45.095 Number of FDP configurations: 1 00:10:45.095 Version: 0 00:10:45.095 Size: 112 00:10:45.095 FDP Configuration Descriptor: 0 00:10:45.095 Descriptor Size: 96 00:10:45.095 Reclaim Group Identifier format: 2 00:10:45.095 FDP Volatile Write Cache: Not Present 00:10:45.095 FDP Configuration: Valid 00:10:45.095 Vendor Specific Size: 0 00:10:45.095 Number of Reclaim Groups: 2 00:10:45.095 Number of Reclaim Unit Handles: 8 00:10:45.095 Max Placement Identifiers: 128 00:10:45.095 Number of Namespaces Supported: 256 00:10:45.095 Reclaim Unit Nominal Size: 6000000 bytes 00:10:45.095 Estimated Reclaim Unit Time Limit: Not Reported 00:10:45.095 RUH Desc #000: RUH Type: Initially Isolated 00:10:45.095 RUH Desc #001: RUH Type: Initially Isolated 00:10:45.095 RUH Desc #002: RUH Type: Initially Isolated 00:10:45.095 RUH Desc #003: RUH Type: Initially Isolated 00:10:45.095 RUH Desc #004: RUH Type: Initially Isolated 00:10:45.095 RUH Desc #005: RUH Type: Initially Isolated 00:10:45.095 RUH Desc #006: RUH Type: Initially Isolated 00:10:45.095 RUH Desc #007: RUH Type: Initially Isolated 00:10:45.095 00:10:45.095 FDP reclaim unit handle usage log page 00:10:45.095 ====================================== 00:10:45.095 Number of Reclaim Unit Handles: 8 00:10:45.095 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:45.095 RUH Usage Desc #001: RUH Attributes: Unused 00:10:45.095 RUH Usage Desc #002: RUH Attributes: Unused 00:10:45.095 RUH Usage Desc #003: RUH Attributes: Unused 00:10:45.095 RUH Usage Desc #004: RUH Attributes: Unused 00:10:45.095 RUH Usage Desc #005: RUH Attributes: Unused 00:10:45.095 RUH Usage Desc #006: RUH Attributes: Unused 00:10:45.095 RUH Usage Desc #007: RUH Attributes: Unused 00:10:45.095 00:10:45.095 FDP statistics log page 00:10:45.095 ======================= 00:10:45.095 Host bytes with metadata written: 552706048 00:10:45.095 Media bytes with metadata written: 553504768 00:10:45.095 Media bytes erased: 0 00:10:45.095 00:10:45.095 FDP events log page 00:10:45.095 =================== 00:10:45.095 Number of FDP events: 0 00:10:45.095 00:10:45.095 NVM Specific Namespace Data 00:10:45.095 =========================== 00:10:45.095 Logical Block Storage Tag Mask: 0 00:10:45.095 Protection Information Capabilities: 00:10:45.095 16b Guard Protection Information Storage Tag Support: No 00:10:45.095 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:45.095 Storage Tag Check Read Support: No 00:10:45.095 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.095 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.095 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.095 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.095 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.095 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.095 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.095 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:45.095 00:10:45.095 real 0m1.685s 00:10:45.095 user 0m0.611s 00:10:45.095 sys 0m0.844s 00:10:45.095 21:41:52 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.095 21:41:52 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:45.095 ************************************ 00:10:45.095 END TEST nvme_identify 00:10:45.095 ************************************ 00:10:45.095 21:41:52 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:45.095 21:41:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.095 21:41:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.095 21:41:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:45.095 ************************************ 00:10:45.095 START TEST nvme_perf 00:10:45.095 ************************************ 00:10:45.095 21:41:52 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:10:45.095 21:41:52 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:46.471 Initializing NVMe Controllers 00:10:46.471 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:46.471 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:46.471 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:46.471 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:46.471 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:46.471 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:46.471 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:46.471 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:46.471 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:46.471 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:46.471 Initialization complete. Launching workers. 
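Before the results: the spdk_nvme_perf flags logged above read, per common usage of the tool (treat the glosses as assumptions and confirm against the tool's --help output), as -q 128 for queue depth, -w read for the workload type, -o 12288 for the I/O size in bytes, -t 1 for the run time in seconds, and -i 0 for the shared-memory group ID. The MiB/s column in the summary table below should then equal IOPS multiplied by the 12288-byte I/O size; a quick sanity check in bash, using the table's own numbers:

# 12288-byte reads at 14078.90 IOPS per device and 84537.41 IOPS in total:
awk 'BEGIN { printf "%.2f MiB/s\n", 14078.90 * 12288 / 1048576 }'  # -> 164.99, matching the per-device rows
awk 'BEGIN { printf "%.2f MiB/s\n", 84537.41 * 12288 / 1048576 }'  # -> 990.67, matching the Total row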
00:10:46.471 ======================================================== 00:10:46.471 Latency(us) 00:10:46.471 Device Information : IOPS MiB/s Average min max 00:10:46.471 PCIE (0000:00:10.0) NSID 1 from core 0: 14078.90 164.99 9109.79 7999.41 51130.98 00:10:46.471 PCIE (0000:00:11.0) NSID 1 from core 0: 14078.90 164.99 9092.16 8057.89 48637.95 00:10:46.471 PCIE (0000:00:13.0) NSID 1 from core 0: 14078.90 164.99 9073.10 7995.92 46486.74 00:10:46.471 PCIE (0000:00:12.0) NSID 1 from core 0: 14078.90 164.99 9054.82 7998.64 44062.60 00:10:46.471 PCIE (0000:00:12.0) NSID 2 from core 0: 14078.90 164.99 9037.50 7955.24 41772.70 00:10:46.471 PCIE (0000:00:12.0) NSID 3 from core 0: 14142.90 165.74 8979.47 7977.60 35013.01 00:10:46.471 ======================================================== 00:10:46.471 Total : 84537.41 990.67 9057.75 7955.24 51130.98 00:10:46.471 00:10:46.471 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:46.471 ================================================================================= 00:10:46.471 1.00000% : 8106.461us 00:10:46.471 10.00000% : 8317.018us 00:10:46.471 25.00000% : 8527.576us 00:10:46.471 50.00000% : 8790.773us 00:10:46.471 75.00000% : 9001.330us 00:10:46.471 90.00000% : 9264.527us 00:10:46.471 95.00000% : 9475.084us 00:10:46.471 98.00000% : 10054.117us 00:10:46.471 99.00000% : 11475.380us 00:10:46.471 99.50000% : 44427.618us 00:10:46.471 99.90000% : 50744.341us 00:10:46.471 99.99000% : 51165.455us 00:10:46.471 99.99900% : 51165.455us 00:10:46.471 99.99990% : 51165.455us 00:10:46.471 99.99999% : 51165.455us 00:10:46.471 00:10:46.471 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:46.471 ================================================================================= 00:10:46.471 1.00000% : 8211.740us 00:10:46.471 10.00000% : 8369.658us 00:10:46.471 25.00000% : 8527.576us 00:10:46.471 50.00000% : 8790.773us 00:10:46.471 75.00000% : 9001.330us 00:10:46.471 90.00000% : 9211.888us 00:10:46.471 95.00000% : 9422.445us 00:10:46.471 98.00000% : 10212.035us 00:10:46.471 99.00000% : 11949.134us 00:10:46.471 99.50000% : 42111.486us 00:10:46.471 99.90000% : 48428.209us 00:10:46.471 99.99000% : 48638.766us 00:10:46.471 99.99900% : 48638.766us 00:10:46.471 99.99990% : 48638.766us 00:10:46.471 99.99999% : 48638.766us 00:10:46.471 00:10:46.471 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:46.471 ================================================================================= 00:10:46.471 1.00000% : 8211.740us 00:10:46.471 10.00000% : 8369.658us 00:10:46.471 25.00000% : 8527.576us 00:10:46.471 50.00000% : 8790.773us 00:10:46.471 75.00000% : 9001.330us 00:10:46.471 90.00000% : 9211.888us 00:10:46.471 95.00000% : 9369.806us 00:10:46.471 98.00000% : 10212.035us 00:10:46.471 99.00000% : 11949.134us 00:10:46.471 99.50000% : 40005.912us 00:10:46.471 99.90000% : 46112.077us 00:10:46.471 99.99000% : 46533.192us 00:10:46.471 99.99900% : 46533.192us 00:10:46.471 99.99990% : 46533.192us 00:10:46.471 99.99999% : 46533.192us 00:10:46.471 00:10:46.471 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:46.471 ================================================================================= 00:10:46.471 1.00000% : 8159.100us 00:10:46.471 10.00000% : 8369.658us 00:10:46.471 25.00000% : 8527.576us 00:10:46.471 50.00000% : 8790.773us 00:10:46.471 75.00000% : 9001.330us 00:10:46.471 90.00000% : 9211.888us 00:10:46.471 95.00000% : 9422.445us 00:10:46.471 98.00000% : 10264.675us 00:10:46.471 99.00000% : 
12264.970us 00:10:46.471 99.50000% : 37900.337us 00:10:46.471 99.90000% : 43795.945us 00:10:46.471 99.99000% : 44217.060us 00:10:46.471 99.99900% : 44217.060us 00:10:46.471 99.99990% : 44217.060us 00:10:46.471 99.99999% : 44217.060us 00:10:46.471 00:10:46.471 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:46.471 ================================================================================= 00:10:46.471 1.00000% : 8159.100us 00:10:46.471 10.00000% : 8369.658us 00:10:46.471 25.00000% : 8527.576us 00:10:46.471 50.00000% : 8790.773us 00:10:46.471 75.00000% : 9001.330us 00:10:46.471 90.00000% : 9211.888us 00:10:46.471 95.00000% : 9422.445us 00:10:46.471 98.00000% : 10317.314us 00:10:46.471 99.00000% : 12633.446us 00:10:46.471 99.50000% : 35794.763us 00:10:46.471 99.90000% : 41479.814us 00:10:46.471 99.99000% : 41900.929us 00:10:46.471 99.99900% : 41900.929us 00:10:46.471 99.99990% : 41900.929us 00:10:46.471 99.99999% : 41900.929us 00:10:46.471 00:10:46.471 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:46.471 ================================================================================= 00:10:46.471 1.00000% : 8159.100us 00:10:46.471 10.00000% : 8369.658us 00:10:46.471 25.00000% : 8527.576us 00:10:46.471 50.00000% : 8790.773us 00:10:46.471 75.00000% : 9001.330us 00:10:46.471 90.00000% : 9211.888us 00:10:46.471 95.00000% : 9422.445us 00:10:46.471 98.00000% : 10475.232us 00:10:46.471 99.00000% : 13001.921us 00:10:46.471 99.50000% : 28635.810us 00:10:46.471 99.90000% : 34741.976us 00:10:46.471 99.99000% : 35163.091us 00:10:46.471 99.99900% : 35163.091us 00:10:46.471 99.99990% : 35163.091us 00:10:46.471 99.99999% : 35163.091us 00:10:46.471 00:10:46.471 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:46.471 ============================================================================== 00:10:46.471 Range in us Cumulative IO count 00:10:46.471 7948.543 - 8001.182: 0.0071% ( 1) 00:10:46.471 8001.182 - 8053.822: 0.3054% ( 42) 00:10:46.471 8053.822 - 8106.461: 1.1506% ( 119) 00:10:46.471 8106.461 - 8159.100: 2.8267% ( 236) 00:10:46.471 8159.100 - 8211.740: 5.0000% ( 306) 00:10:46.471 8211.740 - 8264.379: 8.0611% ( 431) 00:10:46.471 8264.379 - 8317.018: 11.5270% ( 488) 00:10:46.472 8317.018 - 8369.658: 15.5966% ( 573) 00:10:46.472 8369.658 - 8422.297: 19.8438% ( 598) 00:10:46.472 8422.297 - 8474.937: 24.3395% ( 633) 00:10:46.472 8474.937 - 8527.576: 29.0554% ( 664) 00:10:46.472 8527.576 - 8580.215: 33.9986% ( 696) 00:10:46.472 8580.215 - 8632.855: 39.1335% ( 723) 00:10:46.472 8632.855 - 8685.494: 44.2969% ( 727) 00:10:46.472 8685.494 - 8738.133: 49.5099% ( 734) 00:10:46.472 8738.133 - 8790.773: 54.6591% ( 725) 00:10:46.472 8790.773 - 8843.412: 59.9290% ( 742) 00:10:46.472 8843.412 - 8896.051: 65.4474% ( 777) 00:10:46.472 8896.051 - 8948.691: 70.6037% ( 726) 00:10:46.472 8948.691 - 9001.330: 75.4759% ( 686) 00:10:46.472 9001.330 - 9053.969: 79.9006% ( 623) 00:10:46.472 9053.969 - 9106.609: 83.6577% ( 529) 00:10:46.472 9106.609 - 9159.248: 86.7259% ( 432) 00:10:46.472 9159.248 - 9211.888: 89.3537% ( 370) 00:10:46.472 9211.888 - 9264.527: 91.3494% ( 281) 00:10:46.472 9264.527 - 9317.166: 92.8338% ( 209) 00:10:46.472 9317.166 - 9369.806: 93.9560% ( 158) 00:10:46.472 9369.806 - 9422.445: 94.8580% ( 127) 00:10:46.472 9422.445 - 9475.084: 95.5327% ( 95) 00:10:46.472 9475.084 - 9527.724: 95.9659% ( 61) 00:10:46.472 9527.724 - 9580.363: 96.4276% ( 65) 00:10:46.472 9580.363 - 9633.002: 96.7827% ( 50) 00:10:46.472 9633.002 - 9685.642: 97.1165% 
( 47) 00:10:46.472 9685.642 - 9738.281: 97.2940% ( 25) 00:10:46.472 9738.281 - 9790.920: 97.4716% ( 25) 00:10:46.472 9790.920 - 9843.560: 97.6207% ( 21) 00:10:46.472 9843.560 - 9896.199: 97.7415% ( 17) 00:10:46.472 9896.199 - 9948.839: 97.8551% ( 16) 00:10:46.472 9948.839 - 10001.478: 97.9403% ( 12) 00:10:46.472 10001.478 - 10054.117: 98.0114% ( 10) 00:10:46.472 10054.117 - 10106.757: 98.0540% ( 6) 00:10:46.472 10106.757 - 10159.396: 98.1250% ( 10) 00:10:46.472 10159.396 - 10212.035: 98.1818% ( 8) 00:10:46.472 10212.035 - 10264.675: 98.2386% ( 8) 00:10:46.472 10264.675 - 10317.314: 98.3310% ( 13) 00:10:46.472 10317.314 - 10369.953: 98.3949% ( 9) 00:10:46.472 10369.953 - 10422.593: 98.4517% ( 8) 00:10:46.472 10422.593 - 10475.232: 98.5156% ( 9) 00:10:46.472 10475.232 - 10527.871: 98.5938% ( 11) 00:10:46.472 10527.871 - 10580.511: 98.6577% ( 9) 00:10:46.472 10580.511 - 10633.150: 98.7287% ( 10) 00:10:46.472 10633.150 - 10685.790: 98.7855% ( 8) 00:10:46.472 10685.790 - 10738.429: 98.8139% ( 4) 00:10:46.472 10738.429 - 10791.068: 98.8281% ( 2) 00:10:46.472 10791.068 - 10843.708: 98.8494% ( 3) 00:10:46.472 10843.708 - 10896.347: 98.8565% ( 1) 00:10:46.472 10896.347 - 10948.986: 98.8707% ( 2) 00:10:46.472 10948.986 - 11001.626: 98.8849% ( 2) 00:10:46.472 11001.626 - 11054.265: 98.8991% ( 2) 00:10:46.472 11054.265 - 11106.904: 98.9062% ( 1) 00:10:46.472 11106.904 - 11159.544: 98.9276% ( 3) 00:10:46.472 11159.544 - 11212.183: 98.9418% ( 2) 00:10:46.472 11212.183 - 11264.822: 98.9560% ( 2) 00:10:46.472 11264.822 - 11317.462: 98.9631% ( 1) 00:10:46.472 11317.462 - 11370.101: 98.9844% ( 3) 00:10:46.472 11370.101 - 11422.741: 98.9915% ( 1) 00:10:46.472 11422.741 - 11475.380: 99.0128% ( 3) 00:10:46.472 11475.380 - 11528.019: 99.0199% ( 1) 00:10:46.472 11528.019 - 11580.659: 99.0341% ( 2) 00:10:46.472 11580.659 - 11633.298: 99.0554% ( 3) 00:10:46.472 11633.298 - 11685.937: 99.0696% ( 2) 00:10:46.472 11685.937 - 11738.577: 99.0909% ( 3) 00:10:46.472 42322.043 - 42532.601: 99.0980% ( 1) 00:10:46.472 42532.601 - 42743.158: 99.1548% ( 8) 00:10:46.472 42743.158 - 42953.716: 99.1974% ( 6) 00:10:46.472 42953.716 - 43164.273: 99.2472% ( 7) 00:10:46.472 43164.273 - 43374.831: 99.2827% ( 5) 00:10:46.472 43374.831 - 43585.388: 99.3395% ( 8) 00:10:46.472 43585.388 - 43795.945: 99.3892% ( 7) 00:10:46.472 43795.945 - 44006.503: 99.4318% ( 6) 00:10:46.472 44006.503 - 44217.060: 99.4815% ( 7) 00:10:46.472 44217.060 - 44427.618: 99.5312% ( 7) 00:10:46.472 44427.618 - 44638.175: 99.5455% ( 2) 00:10:46.472 49059.881 - 49270.439: 99.5810% ( 5) 00:10:46.472 49270.439 - 49480.996: 99.6307% ( 7) 00:10:46.472 49480.996 - 49691.553: 99.6733% ( 6) 00:10:46.472 49691.553 - 49902.111: 99.7230% ( 7) 00:10:46.472 49902.111 - 50112.668: 99.7727% ( 7) 00:10:46.472 50112.668 - 50323.226: 99.8224% ( 7) 00:10:46.472 50323.226 - 50533.783: 99.8722% ( 7) 00:10:46.472 50533.783 - 50744.341: 99.9219% ( 7) 00:10:46.472 50744.341 - 50954.898: 99.9716% ( 7) 00:10:46.472 50954.898 - 51165.455: 100.0000% ( 4) 00:10:46.472 00:10:46.472 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:46.472 ============================================================================== 00:10:46.472 Range in us Cumulative IO count 00:10:46.472 8053.822 - 8106.461: 0.1562% ( 22) 00:10:46.472 8106.461 - 8159.100: 0.7812% ( 88) 00:10:46.472 8159.100 - 8211.740: 2.2514% ( 207) 00:10:46.472 8211.740 - 8264.379: 4.4389% ( 308) 00:10:46.472 8264.379 - 8317.018: 7.5213% ( 434) 00:10:46.472 8317.018 - 8369.658: 11.3707% ( 542) 00:10:46.472 8369.658 - 
8422.297: 15.8381% ( 629) 00:10:46.472 8422.297 - 8474.937: 20.8239% ( 702) 00:10:46.472 8474.937 - 8527.576: 26.1861% ( 755) 00:10:46.472 8527.576 - 8580.215: 31.9673% ( 814) 00:10:46.472 8580.215 - 8632.855: 37.7557% ( 815) 00:10:46.472 8632.855 - 8685.494: 43.6861% ( 835) 00:10:46.472 8685.494 - 8738.133: 49.7727% ( 857) 00:10:46.472 8738.133 - 8790.773: 55.9091% ( 864) 00:10:46.472 8790.773 - 8843.412: 62.1307% ( 876) 00:10:46.472 8843.412 - 8896.051: 68.2315% ( 859) 00:10:46.472 8896.051 - 8948.691: 73.8849% ( 796) 00:10:46.472 8948.691 - 9001.330: 78.8210% ( 695) 00:10:46.472 9001.330 - 9053.969: 83.0114% ( 590) 00:10:46.472 9053.969 - 9106.609: 86.3778% ( 474) 00:10:46.472 9106.609 - 9159.248: 89.0980% ( 383) 00:10:46.472 9159.248 - 9211.888: 91.0938% ( 281) 00:10:46.472 9211.888 - 9264.527: 92.6705% ( 222) 00:10:46.472 9264.527 - 9317.166: 93.9347% ( 178) 00:10:46.472 9317.166 - 9369.806: 94.9077% ( 137) 00:10:46.472 9369.806 - 9422.445: 95.6392% ( 103) 00:10:46.472 9422.445 - 9475.084: 96.2003% ( 79) 00:10:46.472 9475.084 - 9527.724: 96.6903% ( 69) 00:10:46.472 9527.724 - 9580.363: 97.0241% ( 47) 00:10:46.472 9580.363 - 9633.002: 97.2301% ( 29) 00:10:46.472 9633.002 - 9685.642: 97.3651% ( 19) 00:10:46.472 9685.642 - 9738.281: 97.4432% ( 11) 00:10:46.472 9738.281 - 9790.920: 97.4929% ( 7) 00:10:46.472 9790.920 - 9843.560: 97.5355% ( 6) 00:10:46.472 9843.560 - 9896.199: 97.5710% ( 5) 00:10:46.472 9896.199 - 9948.839: 97.6634% ( 13) 00:10:46.472 9948.839 - 10001.478: 97.7273% ( 9) 00:10:46.472 10001.478 - 10054.117: 97.7983% ( 10) 00:10:46.472 10054.117 - 10106.757: 97.8977% ( 14) 00:10:46.472 10106.757 - 10159.396: 97.9972% ( 14) 00:10:46.472 10159.396 - 10212.035: 98.0682% ( 10) 00:10:46.472 10212.035 - 10264.675: 98.1250% ( 8) 00:10:46.472 10264.675 - 10317.314: 98.2031% ( 11) 00:10:46.472 10317.314 - 10369.953: 98.2884% ( 12) 00:10:46.472 10369.953 - 10422.593: 98.3594% ( 10) 00:10:46.472 10422.593 - 10475.232: 98.4233% ( 9) 00:10:46.472 10475.232 - 10527.871: 98.4943% ( 10) 00:10:46.472 10527.871 - 10580.511: 98.5440% ( 7) 00:10:46.472 10580.511 - 10633.150: 98.6009% ( 8) 00:10:46.472 10633.150 - 10685.790: 98.6293% ( 4) 00:10:46.472 10685.790 - 10738.429: 98.6506% ( 3) 00:10:46.472 10738.429 - 10791.068: 98.6648% ( 2) 00:10:46.472 10791.068 - 10843.708: 98.6719% ( 1) 00:10:46.472 10843.708 - 10896.347: 98.6861% ( 2) 00:10:46.472 10896.347 - 10948.986: 98.7074% ( 3) 00:10:46.472 10948.986 - 11001.626: 98.7145% ( 1) 00:10:46.472 11001.626 - 11054.265: 98.7429% ( 4) 00:10:46.472 11054.265 - 11106.904: 98.7500% ( 1) 00:10:46.472 11106.904 - 11159.544: 98.7713% ( 3) 00:10:46.472 11159.544 - 11212.183: 98.7855% ( 2) 00:10:46.472 11212.183 - 11264.822: 98.7997% ( 2) 00:10:46.472 11264.822 - 11317.462: 98.8210% ( 3) 00:10:46.472 11317.462 - 11370.101: 98.8423% ( 3) 00:10:46.472 11370.101 - 11422.741: 98.8565% ( 2) 00:10:46.472 11422.741 - 11475.380: 98.8707% ( 2) 00:10:46.472 11475.380 - 11528.019: 98.8849% ( 2) 00:10:46.472 11528.019 - 11580.659: 98.8991% ( 2) 00:10:46.472 11580.659 - 11633.298: 98.9205% ( 3) 00:10:46.472 11633.298 - 11685.937: 98.9347% ( 2) 00:10:46.472 11685.937 - 11738.577: 98.9489% ( 2) 00:10:46.472 11738.577 - 11791.216: 98.9631% ( 2) 00:10:46.472 11791.216 - 11843.855: 98.9773% ( 2) 00:10:46.472 11843.855 - 11896.495: 98.9986% ( 3) 00:10:46.472 11896.495 - 11949.134: 99.0199% ( 3) 00:10:46.472 11949.134 - 12001.773: 99.0341% ( 2) 00:10:46.472 12001.773 - 12054.413: 99.0554% ( 3) 00:10:46.472 12054.413 - 12107.052: 99.0696% ( 2) 00:10:46.472 12107.052 - 
12159.692: 99.0838% ( 2) 00:10:46.472 12159.692 - 12212.331: 99.0909% ( 1) 00:10:46.472 40005.912 - 40216.469: 99.0980% ( 1) 00:10:46.472 40216.469 - 40427.027: 99.1477% ( 7) 00:10:46.472 40427.027 - 40637.584: 99.2045% ( 8) 00:10:46.472 40637.584 - 40848.141: 99.2543% ( 7) 00:10:46.473 40848.141 - 41058.699: 99.3111% ( 8) 00:10:46.473 41058.699 - 41269.256: 99.3537% ( 6) 00:10:46.473 41269.256 - 41479.814: 99.3963% ( 6) 00:10:46.473 41479.814 - 41690.371: 99.4460% ( 7) 00:10:46.473 41690.371 - 41900.929: 99.4886% ( 6) 00:10:46.473 41900.929 - 42111.486: 99.5384% ( 7) 00:10:46.473 42111.486 - 42322.043: 99.5455% ( 1) 00:10:46.473 46533.192 - 46743.749: 99.5668% ( 3) 00:10:46.473 46743.749 - 46954.307: 99.6236% ( 8) 00:10:46.473 46954.307 - 47164.864: 99.6662% ( 6) 00:10:46.473 47164.864 - 47375.422: 99.7017% ( 5) 00:10:46.473 47375.422 - 47585.979: 99.7514% ( 7) 00:10:46.473 47585.979 - 47796.537: 99.7940% ( 6) 00:10:46.473 47796.537 - 48007.094: 99.8438% ( 7) 00:10:46.473 48007.094 - 48217.651: 99.8935% ( 7) 00:10:46.473 48217.651 - 48428.209: 99.9432% ( 7) 00:10:46.473 48428.209 - 48638.766: 100.0000% ( 8) 00:10:46.473 00:10:46.473 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:46.473 ============================================================================== 00:10:46.473 Range in us Cumulative IO count 00:10:46.473 7948.543 - 8001.182: 0.0071% ( 1) 00:10:46.473 8001.182 - 8053.822: 0.0355% ( 4) 00:10:46.473 8053.822 - 8106.461: 0.1847% ( 21) 00:10:46.473 8106.461 - 8159.100: 0.8239% ( 90) 00:10:46.473 8159.100 - 8211.740: 2.2017% ( 194) 00:10:46.473 8211.740 - 8264.379: 4.6520% ( 345) 00:10:46.473 8264.379 - 8317.018: 7.6420% ( 421) 00:10:46.473 8317.018 - 8369.658: 11.5270% ( 547) 00:10:46.473 8369.658 - 8422.297: 15.9588% ( 624) 00:10:46.473 8422.297 - 8474.937: 20.9304% ( 700) 00:10:46.473 8474.937 - 8527.576: 26.1435% ( 734) 00:10:46.473 8527.576 - 8580.215: 31.6335% ( 773) 00:10:46.473 8580.215 - 8632.855: 37.4574% ( 820) 00:10:46.473 8632.855 - 8685.494: 43.4588% ( 845) 00:10:46.473 8685.494 - 8738.133: 49.6094% ( 866) 00:10:46.473 8738.133 - 8790.773: 55.7884% ( 870) 00:10:46.473 8790.773 - 8843.412: 62.0170% ( 877) 00:10:46.473 8843.412 - 8896.051: 68.2670% ( 880) 00:10:46.473 8896.051 - 8948.691: 73.9702% ( 803) 00:10:46.473 8948.691 - 9001.330: 78.9062% ( 695) 00:10:46.473 9001.330 - 9053.969: 82.9545% ( 570) 00:10:46.473 9053.969 - 9106.609: 86.3210% ( 474) 00:10:46.473 9106.609 - 9159.248: 89.0696% ( 387) 00:10:46.473 9159.248 - 9211.888: 91.2003% ( 300) 00:10:46.473 9211.888 - 9264.527: 92.9048% ( 240) 00:10:46.473 9264.527 - 9317.166: 94.1832% ( 180) 00:10:46.473 9317.166 - 9369.806: 95.1705% ( 139) 00:10:46.473 9369.806 - 9422.445: 95.9375% ( 108) 00:10:46.473 9422.445 - 9475.084: 96.4489% ( 72) 00:10:46.473 9475.084 - 9527.724: 96.8182% ( 52) 00:10:46.473 9527.724 - 9580.363: 97.1236% ( 43) 00:10:46.473 9580.363 - 9633.002: 97.3224% ( 28) 00:10:46.473 9633.002 - 9685.642: 97.4716% ( 21) 00:10:46.473 9685.642 - 9738.281: 97.5355% ( 9) 00:10:46.473 9738.281 - 9790.920: 97.5568% ( 3) 00:10:46.473 9790.920 - 9843.560: 97.5781% ( 3) 00:10:46.473 9843.560 - 9896.199: 97.6065% ( 4) 00:10:46.473 9896.199 - 9948.839: 97.6847% ( 11) 00:10:46.473 9948.839 - 10001.478: 97.7628% ( 11) 00:10:46.473 10001.478 - 10054.117: 97.8409% ( 11) 00:10:46.473 10054.117 - 10106.757: 97.9119% ( 10) 00:10:46.473 10106.757 - 10159.396: 97.9901% ( 11) 00:10:46.473 10159.396 - 10212.035: 98.0540% ( 9) 00:10:46.473 10212.035 - 10264.675: 98.1392% ( 12) 00:10:46.473 10264.675 - 
10317.314: 98.2031% ( 9) 00:10:46.473 10317.314 - 10369.953: 98.2599% ( 8) 00:10:46.473 10369.953 - 10422.593: 98.3168% ( 8) 00:10:46.473 10422.593 - 10475.232: 98.3736% ( 8) 00:10:46.473 10475.232 - 10527.871: 98.4375% ( 9) 00:10:46.473 10527.871 - 10580.511: 98.4943% ( 8) 00:10:46.473 10580.511 - 10633.150: 98.5440% ( 7) 00:10:46.473 10633.150 - 10685.790: 98.5795% ( 5) 00:10:46.473 10685.790 - 10738.429: 98.6293% ( 7) 00:10:46.473 10738.429 - 10791.068: 98.6719% ( 6) 00:10:46.473 10791.068 - 10843.708: 98.6861% ( 2) 00:10:46.473 10843.708 - 10896.347: 98.7003% ( 2) 00:10:46.473 10896.347 - 10948.986: 98.7145% ( 2) 00:10:46.473 10948.986 - 11001.626: 98.7287% ( 2) 00:10:46.473 11001.626 - 11054.265: 98.7500% ( 3) 00:10:46.473 11054.265 - 11106.904: 98.7642% ( 2) 00:10:46.473 11106.904 - 11159.544: 98.7784% ( 2) 00:10:46.473 11159.544 - 11212.183: 98.7926% ( 2) 00:10:46.473 11212.183 - 11264.822: 98.8139% ( 3) 00:10:46.473 11264.822 - 11317.462: 98.8281% ( 2) 00:10:46.473 11317.462 - 11370.101: 98.8423% ( 2) 00:10:46.473 11370.101 - 11422.741: 98.8636% ( 3) 00:10:46.473 11422.741 - 11475.380: 98.8778% ( 2) 00:10:46.473 11475.380 - 11528.019: 98.8920% ( 2) 00:10:46.473 11528.019 - 11580.659: 98.9134% ( 3) 00:10:46.473 11580.659 - 11633.298: 98.9276% ( 2) 00:10:46.473 11633.298 - 11685.937: 98.9418% ( 2) 00:10:46.473 11685.937 - 11738.577: 98.9560% ( 2) 00:10:46.473 11738.577 - 11791.216: 98.9702% ( 2) 00:10:46.473 11791.216 - 11843.855: 98.9844% ( 2) 00:10:46.473 11843.855 - 11896.495: 98.9986% ( 2) 00:10:46.473 11896.495 - 11949.134: 99.0199% ( 3) 00:10:46.473 11949.134 - 12001.773: 99.0341% ( 2) 00:10:46.473 12001.773 - 12054.413: 99.0483% ( 2) 00:10:46.473 12054.413 - 12107.052: 99.0696% ( 3) 00:10:46.473 12107.052 - 12159.692: 99.0838% ( 2) 00:10:46.473 12159.692 - 12212.331: 99.0909% ( 1) 00:10:46.473 38110.895 - 38321.452: 99.1051% ( 2) 00:10:46.473 38321.452 - 38532.010: 99.1548% ( 7) 00:10:46.473 38532.010 - 38742.567: 99.1903% ( 5) 00:10:46.473 38742.567 - 38953.124: 99.2401% ( 7) 00:10:46.473 38953.124 - 39163.682: 99.2969% ( 8) 00:10:46.473 39163.682 - 39374.239: 99.3466% ( 7) 00:10:46.473 39374.239 - 39584.797: 99.4034% ( 8) 00:10:46.473 39584.797 - 39795.354: 99.4531% ( 7) 00:10:46.473 39795.354 - 40005.912: 99.5028% ( 7) 00:10:46.473 40005.912 - 40216.469: 99.5455% ( 6) 00:10:46.473 44427.618 - 44638.175: 99.5739% ( 4) 00:10:46.473 44638.175 - 44848.733: 99.6165% ( 6) 00:10:46.473 44848.733 - 45059.290: 99.6591% ( 6) 00:10:46.473 45059.290 - 45269.847: 99.7088% ( 7) 00:10:46.473 45269.847 - 45480.405: 99.7514% ( 6) 00:10:46.473 45480.405 - 45690.962: 99.7940% ( 6) 00:10:46.473 45690.962 - 45901.520: 99.8509% ( 8) 00:10:46.473 45901.520 - 46112.077: 99.9077% ( 8) 00:10:46.473 46112.077 - 46322.635: 99.9574% ( 7) 00:10:46.473 46322.635 - 46533.192: 100.0000% ( 6) 00:10:46.473 00:10:46.473 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:46.473 ============================================================================== 00:10:46.473 Range in us Cumulative IO count 00:10:46.473 7948.543 - 8001.182: 0.0071% ( 1) 00:10:46.473 8001.182 - 8053.822: 0.0710% ( 9) 00:10:46.473 8053.822 - 8106.461: 0.4972% ( 60) 00:10:46.473 8106.461 - 8159.100: 1.7685% ( 179) 00:10:46.473 8159.100 - 8211.740: 3.6506% ( 265) 00:10:46.473 8211.740 - 8264.379: 6.1790% ( 356) 00:10:46.473 8264.379 - 8317.018: 9.5881% ( 480) 00:10:46.473 8317.018 - 8369.658: 13.5795% ( 562) 00:10:46.473 8369.658 - 8422.297: 17.8480% ( 601) 00:10:46.473 8422.297 - 8474.937: 22.5852% ( 667) 00:10:46.473 
8474.937 - 8527.576: 27.6136% ( 708) 00:10:46.473 8527.576 - 8580.215: 32.8267% ( 734) 00:10:46.473 8580.215 - 8632.855: 38.3452% ( 777) 00:10:46.473 8632.855 - 8685.494: 43.8494% ( 775) 00:10:46.473 8685.494 - 8738.133: 49.5241% ( 799) 00:10:46.473 8738.133 - 8790.773: 55.0852% ( 783) 00:10:46.473 8790.773 - 8843.412: 60.8736% ( 815) 00:10:46.473 8843.412 - 8896.051: 66.5909% ( 805) 00:10:46.473 8896.051 - 8948.691: 72.1520% ( 783) 00:10:46.473 8948.691 - 9001.330: 77.0739% ( 693) 00:10:46.473 9001.330 - 9053.969: 81.2784% ( 592) 00:10:46.473 9053.969 - 9106.609: 84.9858% ( 522) 00:10:46.473 9106.609 - 9159.248: 87.9261% ( 414) 00:10:46.473 9159.248 - 9211.888: 90.2557% ( 328) 00:10:46.473 9211.888 - 9264.527: 92.0668% ( 255) 00:10:46.473 9264.527 - 9317.166: 93.5440% ( 208) 00:10:46.473 9317.166 - 9369.806: 94.6662% ( 158) 00:10:46.473 9369.806 - 9422.445: 95.5327% ( 122) 00:10:46.473 9422.445 - 9475.084: 96.1364% ( 85) 00:10:46.473 9475.084 - 9527.724: 96.5909% ( 64) 00:10:46.473 9527.724 - 9580.363: 96.9034% ( 44) 00:10:46.473 9580.363 - 9633.002: 97.0739% ( 24) 00:10:46.473 9633.002 - 9685.642: 97.2585% ( 26) 00:10:46.473 9685.642 - 9738.281: 97.3935% ( 19) 00:10:46.473 9738.281 - 9790.920: 97.4503% ( 8) 00:10:46.473 9790.920 - 9843.560: 97.4716% ( 3) 00:10:46.473 9843.560 - 9896.199: 97.5284% ( 8) 00:10:46.473 9896.199 - 9948.839: 97.5852% ( 8) 00:10:46.473 9948.839 - 10001.478: 97.6705% ( 12) 00:10:46.473 10001.478 - 10054.117: 97.7202% ( 7) 00:10:46.473 10054.117 - 10106.757: 97.7983% ( 11) 00:10:46.473 10106.757 - 10159.396: 97.8764% ( 11) 00:10:46.473 10159.396 - 10212.035: 97.9545% ( 11) 00:10:46.473 10212.035 - 10264.675: 98.0256% ( 10) 00:10:46.473 10264.675 - 10317.314: 98.0966% ( 10) 00:10:46.473 10317.314 - 10369.953: 98.1676% ( 10) 00:10:46.473 10369.953 - 10422.593: 98.2315% ( 9) 00:10:46.473 10422.593 - 10475.232: 98.3026% ( 10) 00:10:46.473 10475.232 - 10527.871: 98.3736% ( 10) 00:10:46.473 10527.871 - 10580.511: 98.4517% ( 11) 00:10:46.473 10580.511 - 10633.150: 98.5227% ( 10) 00:10:46.473 10633.150 - 10685.790: 98.5866% ( 9) 00:10:46.473 10685.790 - 10738.429: 98.6293% ( 6) 00:10:46.473 10738.429 - 10791.068: 98.6364% ( 1) 00:10:46.473 11106.904 - 11159.544: 98.6506% ( 2) 00:10:46.473 11159.544 - 11212.183: 98.6790% ( 4) 00:10:46.473 11212.183 - 11264.822: 98.7003% ( 3) 00:10:46.473 11264.822 - 11317.462: 98.7074% ( 1) 00:10:46.473 11317.462 - 11370.101: 98.7216% ( 2) 00:10:46.473 11370.101 - 11422.741: 98.7358% ( 2) 00:10:46.473 11422.741 - 11475.380: 98.7500% ( 2) 00:10:46.473 11475.380 - 11528.019: 98.7713% ( 3) 00:10:46.473 11528.019 - 11580.659: 98.7855% ( 2) 00:10:46.473 11580.659 - 11633.298: 98.7997% ( 2) 00:10:46.473 11633.298 - 11685.937: 98.8210% ( 3) 00:10:46.473 11685.937 - 11738.577: 98.8423% ( 3) 00:10:46.473 11738.577 - 11791.216: 98.8636% ( 3) 00:10:46.474 11791.216 - 11843.855: 98.8778% ( 2) 00:10:46.474 11843.855 - 11896.495: 98.8920% ( 2) 00:10:46.474 11896.495 - 11949.134: 98.9134% ( 3) 00:10:46.474 11949.134 - 12001.773: 98.9418% ( 4) 00:10:46.474 12001.773 - 12054.413: 98.9560% ( 2) 00:10:46.474 12054.413 - 12107.052: 98.9702% ( 2) 00:10:46.474 12107.052 - 12159.692: 98.9844% ( 2) 00:10:46.474 12159.692 - 12212.331: 98.9986% ( 2) 00:10:46.474 12212.331 - 12264.970: 99.0057% ( 1) 00:10:46.474 12264.970 - 12317.610: 99.0199% ( 2) 00:10:46.474 12317.610 - 12370.249: 99.0341% ( 2) 00:10:46.474 12370.249 - 12422.888: 99.0554% ( 3) 00:10:46.474 12422.888 - 12475.528: 99.0696% ( 2) 00:10:46.474 12475.528 - 12528.167: 99.0838% ( 2) 00:10:46.474 
12528.167 - 12580.806: 99.0909% ( 1) 00:10:46.474 36005.320 - 36215.878: 99.1193% ( 4) 00:10:46.474 36215.878 - 36426.435: 99.1619% ( 6) 00:10:46.474 36426.435 - 36636.993: 99.2116% ( 7) 00:10:46.474 36636.993 - 36847.550: 99.2685% ( 8) 00:10:46.474 36847.550 - 37058.108: 99.3182% ( 7) 00:10:46.474 37058.108 - 37268.665: 99.3750% ( 8) 00:10:46.474 37268.665 - 37479.222: 99.4247% ( 7) 00:10:46.474 37479.222 - 37689.780: 99.4744% ( 7) 00:10:46.474 37689.780 - 37900.337: 99.5241% ( 7) 00:10:46.474 37900.337 - 38110.895: 99.5455% ( 3) 00:10:46.474 42111.486 - 42322.043: 99.5952% ( 7) 00:10:46.474 42322.043 - 42532.601: 99.6449% ( 7) 00:10:46.474 42532.601 - 42743.158: 99.6946% ( 7) 00:10:46.474 42743.158 - 42953.716: 99.7443% ( 7) 00:10:46.474 42953.716 - 43164.273: 99.7869% ( 6) 00:10:46.474 43164.273 - 43374.831: 99.8366% ( 7) 00:10:46.474 43374.831 - 43585.388: 99.8864% ( 7) 00:10:46.474 43585.388 - 43795.945: 99.9290% ( 6) 00:10:46.474 43795.945 - 44006.503: 99.9787% ( 7) 00:10:46.474 44006.503 - 44217.060: 100.0000% ( 3) 00:10:46.474 00:10:46.474 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:46.474 ============================================================================== 00:10:46.474 Range in us Cumulative IO count 00:10:46.474 7948.543 - 8001.182: 0.0284% ( 4) 00:10:46.474 8001.182 - 8053.822: 0.0994% ( 10) 00:10:46.474 8053.822 - 8106.461: 0.5256% ( 60) 00:10:46.474 8106.461 - 8159.100: 1.6477% ( 158) 00:10:46.474 8159.100 - 8211.740: 3.6364% ( 280) 00:10:46.474 8211.740 - 8264.379: 6.1932% ( 360) 00:10:46.474 8264.379 - 8317.018: 9.7230% ( 497) 00:10:46.474 8317.018 - 8369.658: 13.6151% ( 548) 00:10:46.474 8369.658 - 8422.297: 17.9474% ( 610) 00:10:46.474 8422.297 - 8474.937: 22.7060% ( 670) 00:10:46.474 8474.937 - 8527.576: 27.7699% ( 713) 00:10:46.474 8527.576 - 8580.215: 32.9048% ( 723) 00:10:46.474 8580.215 - 8632.855: 38.3736% ( 770) 00:10:46.474 8632.855 - 8685.494: 43.8707% ( 774) 00:10:46.474 8685.494 - 8738.133: 49.4957% ( 792) 00:10:46.474 8738.133 - 8790.773: 55.2131% ( 805) 00:10:46.474 8790.773 - 8843.412: 60.8949% ( 800) 00:10:46.474 8843.412 - 8896.051: 66.5412% ( 795) 00:10:46.474 8896.051 - 8948.691: 72.1236% ( 786) 00:10:46.474 8948.691 - 9001.330: 77.0881% ( 699) 00:10:46.474 9001.330 - 9053.969: 81.3778% ( 604) 00:10:46.474 9053.969 - 9106.609: 84.9574% ( 504) 00:10:46.474 9106.609 - 9159.248: 87.8835% ( 412) 00:10:46.474 9159.248 - 9211.888: 90.1278% ( 316) 00:10:46.474 9211.888 - 9264.527: 91.9531% ( 257) 00:10:46.474 9264.527 - 9317.166: 93.4375% ( 209) 00:10:46.474 9317.166 - 9369.806: 94.5739% ( 160) 00:10:46.474 9369.806 - 9422.445: 95.3835% ( 114) 00:10:46.474 9422.445 - 9475.084: 95.9872% ( 85) 00:10:46.474 9475.084 - 9527.724: 96.3920% ( 57) 00:10:46.474 9527.724 - 9580.363: 96.7045% ( 44) 00:10:46.474 9580.363 - 9633.002: 96.9460% ( 34) 00:10:46.474 9633.002 - 9685.642: 97.0952% ( 21) 00:10:46.474 9685.642 - 9738.281: 97.2230% ( 18) 00:10:46.474 9738.281 - 9790.920: 97.3153% ( 13) 00:10:46.474 9790.920 - 9843.560: 97.3722% ( 8) 00:10:46.474 9843.560 - 9896.199: 97.4219% ( 7) 00:10:46.474 9896.199 - 9948.839: 97.4929% ( 10) 00:10:46.474 9948.839 - 10001.478: 97.5568% ( 9) 00:10:46.474 10001.478 - 10054.117: 97.6349% ( 11) 00:10:46.474 10054.117 - 10106.757: 97.7060% ( 10) 00:10:46.474 10106.757 - 10159.396: 97.7912% ( 12) 00:10:46.474 10159.396 - 10212.035: 97.8622% ( 10) 00:10:46.474 10212.035 - 10264.675: 97.9332% ( 10) 00:10:46.474 10264.675 - 10317.314: 98.0114% ( 11) 00:10:46.474 10317.314 - 10369.953: 98.0753% ( 9) 
00:10:46.474 10369.953 - 10422.593: 98.1463% ( 10) 00:10:46.474 10422.593 - 10475.232: 98.2244% ( 11) 00:10:46.474 10475.232 - 10527.871: 98.3026% ( 11) 00:10:46.474 10527.871 - 10580.511: 98.3736% ( 10) 00:10:46.474 10580.511 - 10633.150: 98.4446% ( 10) 00:10:46.474 10633.150 - 10685.790: 98.5156% ( 10) 00:10:46.474 10685.790 - 10738.429: 98.5582% ( 6) 00:10:46.474 10738.429 - 10791.068: 98.5866% ( 4) 00:10:46.474 10791.068 - 10843.708: 98.6009% ( 2) 00:10:46.474 10843.708 - 10896.347: 98.6222% ( 3) 00:10:46.474 10896.347 - 10948.986: 98.6364% ( 2) 00:10:46.474 11422.741 - 11475.380: 98.6435% ( 1) 00:10:46.474 11475.380 - 11528.019: 98.6648% ( 3) 00:10:46.474 11528.019 - 11580.659: 98.6861% ( 3) 00:10:46.474 11580.659 - 11633.298: 98.7003% ( 2) 00:10:46.474 11633.298 - 11685.937: 98.7145% ( 2) 00:10:46.474 11685.937 - 11738.577: 98.7287% ( 2) 00:10:46.474 11738.577 - 11791.216: 98.7429% ( 2) 00:10:46.474 11791.216 - 11843.855: 98.7642% ( 3) 00:10:46.474 11843.855 - 11896.495: 98.7784% ( 2) 00:10:46.474 11896.495 - 11949.134: 98.7926% ( 2) 00:10:46.474 11949.134 - 12001.773: 98.8068% ( 2) 00:10:46.474 12001.773 - 12054.413: 98.8210% ( 2) 00:10:46.474 12054.413 - 12107.052: 98.8423% ( 3) 00:10:46.474 12107.052 - 12159.692: 98.8565% ( 2) 00:10:46.474 12159.692 - 12212.331: 98.8707% ( 2) 00:10:46.474 12212.331 - 12264.970: 98.8920% ( 3) 00:10:46.474 12264.970 - 12317.610: 98.9062% ( 2) 00:10:46.474 12317.610 - 12370.249: 98.9276% ( 3) 00:10:46.474 12370.249 - 12422.888: 98.9489% ( 3) 00:10:46.474 12422.888 - 12475.528: 98.9631% ( 2) 00:10:46.474 12475.528 - 12528.167: 98.9844% ( 3) 00:10:46.474 12528.167 - 12580.806: 98.9986% ( 2) 00:10:46.474 12580.806 - 12633.446: 99.0128% ( 2) 00:10:46.474 12633.446 - 12686.085: 99.0270% ( 2) 00:10:46.474 12686.085 - 12738.724: 99.0483% ( 3) 00:10:46.474 12738.724 - 12791.364: 99.0625% ( 2) 00:10:46.474 12791.364 - 12844.003: 99.0838% ( 3) 00:10:46.474 12844.003 - 12896.643: 99.0909% ( 1) 00:10:46.474 33689.189 - 33899.746: 99.0980% ( 1) 00:10:46.474 33899.746 - 34110.304: 99.1477% ( 7) 00:10:46.474 34110.304 - 34320.861: 99.1974% ( 7) 00:10:46.474 34320.861 - 34531.418: 99.2472% ( 7) 00:10:46.474 34531.418 - 34741.976: 99.3040% ( 8) 00:10:46.474 34741.976 - 34952.533: 99.3608% ( 8) 00:10:46.474 34952.533 - 35163.091: 99.4105% ( 7) 00:10:46.474 35163.091 - 35373.648: 99.4247% ( 2) 00:10:46.474 35373.648 - 35584.206: 99.4744% ( 7) 00:10:46.474 35584.206 - 35794.763: 99.5241% ( 7) 00:10:46.474 35794.763 - 36005.320: 99.5455% ( 3) 00:10:46.474 39795.354 - 40005.912: 99.5668% ( 3) 00:10:46.474 40005.912 - 40216.469: 99.6165% ( 7) 00:10:46.474 40216.469 - 40427.027: 99.6591% ( 6) 00:10:46.474 40427.027 - 40637.584: 99.7159% ( 8) 00:10:46.474 40637.584 - 40848.141: 99.7585% ( 6) 00:10:46.474 40848.141 - 41058.699: 99.8153% ( 8) 00:10:46.474 41058.699 - 41269.256: 99.8651% ( 7) 00:10:46.474 41269.256 - 41479.814: 99.9219% ( 8) 00:10:46.474 41479.814 - 41690.371: 99.9787% ( 8) 00:10:46.474 41690.371 - 41900.929: 100.0000% ( 3) 00:10:46.474 00:10:46.474 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:46.474 ============================================================================== 00:10:46.474 Range in us Cumulative IO count 00:10:46.474 7948.543 - 8001.182: 0.0212% ( 3) 00:10:46.474 8001.182 - 8053.822: 0.1343% ( 16) 00:10:46.474 8053.822 - 8106.461: 0.5090% ( 53) 00:10:46.474 8106.461 - 8159.100: 1.6827% ( 166) 00:10:46.474 8159.100 - 8211.740: 3.5068% ( 258) 00:10:46.474 8211.740 - 8264.379: 6.3702% ( 405) 00:10:46.474 8264.379 - 
8317.018: 9.5376% ( 448) 00:10:46.474 8317.018 - 8369.658: 13.5535% ( 568) 00:10:46.474 8369.658 - 8422.297: 18.0288% ( 633) 00:10:46.474 8422.297 - 8474.937: 22.7446% ( 667) 00:10:46.474 8474.937 - 8527.576: 27.7715% ( 711) 00:10:46.474 8527.576 - 8580.215: 33.0812% ( 751) 00:10:46.474 8580.215 - 8632.855: 38.4474% ( 759) 00:10:46.474 8632.855 - 8685.494: 43.8419% ( 763) 00:10:46.474 8685.494 - 8738.133: 49.3849% ( 784) 00:10:46.474 8738.133 - 8790.773: 55.0622% ( 803) 00:10:46.474 8790.773 - 8843.412: 60.7466% ( 804) 00:10:46.474 8843.412 - 8896.051: 66.4734% ( 810) 00:10:46.474 8896.051 - 8948.691: 71.9740% ( 778) 00:10:46.474 8948.691 - 9001.330: 77.0079% ( 712) 00:10:46.474 9001.330 - 9053.969: 81.2712% ( 603) 00:10:46.474 9053.969 - 9106.609: 84.7851% ( 497) 00:10:46.474 9106.609 - 9159.248: 87.6838% ( 410) 00:10:46.474 9159.248 - 9211.888: 90.0311% ( 332) 00:10:46.474 9211.888 - 9264.527: 91.8764% ( 261) 00:10:46.474 9264.527 - 9317.166: 93.2763% ( 198) 00:10:46.474 9317.166 - 9369.806: 94.3510% ( 152) 00:10:46.474 9369.806 - 9422.445: 95.1640% ( 115) 00:10:46.474 9422.445 - 9475.084: 95.6801% ( 73) 00:10:46.474 9475.084 - 9527.724: 96.1468% ( 66) 00:10:46.474 9527.724 - 9580.363: 96.5074% ( 51) 00:10:46.474 9580.363 - 9633.002: 96.7689% ( 37) 00:10:46.474 9633.002 - 9685.642: 96.9598% ( 27) 00:10:46.474 9685.642 - 9738.281: 97.1012% ( 20) 00:10:46.474 9738.281 - 9790.920: 97.2002% ( 14) 00:10:46.474 9790.920 - 9843.560: 97.2780% ( 11) 00:10:46.474 9843.560 - 9896.199: 97.3416% ( 9) 00:10:46.474 9896.199 - 9948.839: 97.3628% ( 3) 00:10:46.474 9948.839 - 10001.478: 97.4265% ( 9) 00:10:46.474 10001.478 - 10054.117: 97.4972% ( 10) 00:10:46.474 10054.117 - 10106.757: 97.5467% ( 7) 00:10:46.474 10106.757 - 10159.396: 97.6174% ( 10) 00:10:46.474 10159.396 - 10212.035: 97.6739% ( 8) 00:10:46.474 10212.035 - 10264.675: 97.7446% ( 10) 00:10:46.474 10264.675 - 10317.314: 97.8012% ( 8) 00:10:46.474 10317.314 - 10369.953: 97.8790% ( 11) 00:10:46.475 10369.953 - 10422.593: 97.9497% ( 10) 00:10:46.475 10422.593 - 10475.232: 98.0204% ( 10) 00:10:46.475 10475.232 - 10527.871: 98.0911% ( 10) 00:10:46.475 10527.871 - 10580.511: 98.1759% ( 12) 00:10:46.475 10580.511 - 10633.150: 98.2607% ( 12) 00:10:46.475 10633.150 - 10685.790: 98.3597% ( 14) 00:10:46.475 10685.790 - 10738.429: 98.4163% ( 8) 00:10:46.475 10738.429 - 10791.068: 98.4587% ( 6) 00:10:46.475 10791.068 - 10843.708: 98.4941% ( 5) 00:10:46.475 10843.708 - 10896.347: 98.5082% ( 2) 00:10:46.475 10896.347 - 10948.986: 98.5294% ( 3) 00:10:46.475 10948.986 - 11001.626: 98.5436% ( 2) 00:10:46.475 11001.626 - 11054.265: 98.5577% ( 2) 00:10:46.475 11054.265 - 11106.904: 98.5789% ( 3) 00:10:46.475 11106.904 - 11159.544: 98.6001% ( 3) 00:10:46.475 11159.544 - 11212.183: 98.6143% ( 2) 00:10:46.475 11212.183 - 11264.822: 98.6284% ( 2) 00:10:46.475 11264.822 - 11317.462: 98.6425% ( 2) 00:10:46.475 11791.216 - 11843.855: 98.6567% ( 2) 00:10:46.475 11843.855 - 11896.495: 98.6779% ( 3) 00:10:46.475 11896.495 - 11949.134: 98.6920% ( 2) 00:10:46.475 11949.134 - 12001.773: 98.7062% ( 2) 00:10:46.475 12001.773 - 12054.413: 98.7203% ( 2) 00:10:46.475 12054.413 - 12107.052: 98.7344% ( 2) 00:10:46.475 12107.052 - 12159.692: 98.7486% ( 2) 00:10:46.475 12159.692 - 12212.331: 98.7627% ( 2) 00:10:46.475 12212.331 - 12264.970: 98.7769% ( 2) 00:10:46.475 12264.970 - 12317.610: 98.7910% ( 2) 00:10:46.475 12317.610 - 12370.249: 98.8193% ( 4) 00:10:46.475 12370.249 - 12422.888: 98.8334% ( 2) 00:10:46.475 12422.888 - 12475.528: 98.8546% ( 3) 00:10:46.475 12475.528 - 
12528.167: 98.8688% ( 2) 00:10:46.475 12528.167 - 12580.806: 98.8829% ( 2) 00:10:46.475 12580.806 - 12633.446: 98.8971% ( 2) 00:10:46.475 12633.446 - 12686.085: 98.9112% ( 2) 00:10:46.475 12686.085 - 12738.724: 98.9324% ( 3) 00:10:46.475 12738.724 - 12791.364: 98.9465% ( 2) 00:10:46.475 12791.364 - 12844.003: 98.9678% ( 3) 00:10:46.475 12844.003 - 12896.643: 98.9819% ( 2) 00:10:46.475 12896.643 - 12949.282: 98.9960% ( 2) 00:10:46.475 12949.282 - 13001.921: 99.0102% ( 2) 00:10:46.475 13001.921 - 13054.561: 99.0314% ( 3) 00:10:46.475 13054.561 - 13107.200: 99.0455% ( 2) 00:10:46.475 13107.200 - 13159.839: 99.0597% ( 2) 00:10:46.475 13159.839 - 13212.479: 99.0809% ( 3) 00:10:46.475 13212.479 - 13265.118: 99.0880% ( 1) 00:10:46.475 13265.118 - 13317.757: 99.0950% ( 1) 00:10:46.475 26846.072 - 26951.351: 99.1021% ( 1) 00:10:46.475 26951.351 - 27161.908: 99.1587% ( 8) 00:10:46.475 27161.908 - 27372.466: 99.2081% ( 7) 00:10:46.475 27372.466 - 27583.023: 99.2647% ( 8) 00:10:46.475 27583.023 - 27793.581: 99.3142% ( 7) 00:10:46.475 27793.581 - 28004.138: 99.3637% ( 7) 00:10:46.475 28004.138 - 28214.696: 99.4132% ( 7) 00:10:46.475 28214.696 - 28425.253: 99.4697% ( 8) 00:10:46.475 28425.253 - 28635.810: 99.5263% ( 8) 00:10:46.475 28635.810 - 28846.368: 99.5475% ( 3) 00:10:46.475 33057.516 - 33268.074: 99.5617% ( 2) 00:10:46.475 33268.074 - 33478.631: 99.6182% ( 8) 00:10:46.475 33478.631 - 33689.189: 99.6677% ( 7) 00:10:46.475 33689.189 - 33899.746: 99.7243% ( 8) 00:10:46.475 33899.746 - 34110.304: 99.7667% ( 6) 00:10:46.475 34110.304 - 34320.861: 99.8232% ( 8) 00:10:46.475 34320.861 - 34531.418: 99.8727% ( 7) 00:10:46.475 34531.418 - 34741.976: 99.9293% ( 8) 00:10:46.475 34741.976 - 34952.533: 99.9788% ( 7) 00:10:46.475 34952.533 - 35163.091: 100.0000% ( 3) 00:10:46.475 00:10:46.475 21:41:54 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:47.902 Initializing NVMe Controllers 00:10:47.902 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:47.902 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:47.902 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:47.902 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:47.902 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:47.902 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:47.902 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:47.902 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:47.902 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:47.902 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:47.902 Initialization complete. Launching workers. 
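An aside on the invocation above: as I read spdk_nvme_perf's usage, -q 128 is the queue depth, -w write the workload, -o 12288 the I/O size in bytes (12 KiB), -t 1 the run time in seconds, -i 0 the shared-memory group ID, and giving -L twice requests the detailed per-bucket histograms in addition to the latency summary. With a fixed queue depth, Little's law ties the reported IOPS, average latency, and throughput together, so the table that follows can be cross-checked against the command line; a minimal back-of-envelope sketch, using the per-namespace figures reported below:

    # Back-of-envelope cross-check of the perf output (not part of the test run):
    # at steady state with a fixed queue depth, avg_latency ~= QD / IOPS and
    # throughput = IOPS * io_size.
    QD = 128           # -q 128
    IO_SIZE = 12288    # -o 12288, 12 KiB per write

    iops = 10719.31    # per-namespace IOPS from the table below
    print(f"{iops * IO_SIZE / 2**20:.2f} MiB/s")  # ~125.62, matches the MiB/s column
    print(f"~{QD / iops * 1e6:.0f} us average")   # ~11941 us; perf reports 11867-11969 us

The ~1% gap between the predicted and reported averages is expected, since a one-second run spends a measurable fraction of its time ramping the queue up and draining it.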
00:10:47.902 ======================================================== 00:10:47.902 Latency(us) 00:10:47.902 Device Information : IOPS MiB/s Average min max 00:10:47.902 PCIE (0000:00:10.0) NSID 1 from core 0: 10719.31 125.62 11968.61 7565.55 45580.24 00:10:47.902 PCIE (0000:00:11.0) NSID 1 from core 0: 10719.31 125.62 11947.96 7612.56 43595.54 00:10:47.902 PCIE (0000:00:13.0) NSID 1 from core 0: 10719.31 125.62 11927.14 7636.24 43203.39 00:10:47.902 PCIE (0000:00:12.0) NSID 1 from core 0: 10719.31 125.62 11907.30 7621.40 42185.38 00:10:47.902 PCIE (0000:00:12.0) NSID 2 from core 0: 10719.31 125.62 11887.43 7581.12 40980.25 00:10:47.902 PCIE (0000:00:12.0) NSID 3 from core 0: 10719.31 125.62 11867.01 7578.73 39745.14 00:10:47.902 ======================================================== 00:10:47.902 Total : 64315.84 753.70 11917.58 7565.55 45580.24 00:10:47.902 00:10:47.902 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:47.902 ================================================================================= 00:10:47.902 1.00000% : 7948.543us 00:10:47.902 10.00000% : 8317.018us 00:10:47.902 25.00000% : 8843.412us 00:10:47.902 50.00000% : 10264.675us 00:10:47.902 75.00000% : 14423.184us 00:10:47.902 90.00000% : 17476.267us 00:10:47.902 95.00000% : 18739.611us 00:10:47.902 98.00000% : 20213.513us 00:10:47.902 99.00000% : 32425.844us 00:10:47.902 99.50000% : 43585.388us 00:10:47.902 99.90000% : 45269.847us 00:10:47.902 99.99000% : 45690.962us 00:10:47.902 99.99900% : 45690.962us 00:10:47.902 99.99990% : 45690.962us 00:10:47.902 99.99999% : 45690.962us 00:10:47.902 00:10:47.902 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:47.902 ================================================================================= 00:10:47.902 1.00000% : 7948.543us 00:10:47.902 10.00000% : 8317.018us 00:10:47.902 25.00000% : 8843.412us 00:10:47.902 50.00000% : 10264.675us 00:10:47.902 75.00000% : 14317.905us 00:10:47.902 90.00000% : 17476.267us 00:10:47.902 95.00000% : 18844.890us 00:10:47.902 98.00000% : 20318.792us 00:10:47.902 99.00000% : 32846.959us 00:10:47.902 99.50000% : 41900.929us 00:10:47.902 99.90000% : 43374.831us 00:10:47.902 99.99000% : 43585.388us 00:10:47.902 99.99900% : 43795.945us 00:10:47.902 99.99990% : 43795.945us 00:10:47.902 99.99999% : 43795.945us 00:10:47.902 00:10:47.902 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:47.902 ================================================================================= 00:10:47.902 1.00000% : 7948.543us 00:10:47.902 10.00000% : 8317.018us 00:10:47.902 25.00000% : 8790.773us 00:10:47.902 50.00000% : 10422.593us 00:10:47.902 75.00000% : 14002.069us 00:10:47.902 90.00000% : 17581.545us 00:10:47.902 95.00000% : 18634.333us 00:10:47.902 98.00000% : 19476.562us 00:10:47.902 99.00000% : 32215.287us 00:10:47.902 99.50000% : 41479.814us 00:10:47.902 99.90000% : 42953.716us 00:10:47.902 99.99000% : 43374.831us 00:10:47.902 99.99900% : 43374.831us 00:10:47.902 99.99990% : 43374.831us 00:10:47.902 99.99999% : 43374.831us 00:10:47.902 00:10:47.902 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:47.902 ================================================================================= 00:10:47.902 1.00000% : 7948.543us 00:10:47.902 10.00000% : 8317.018us 00:10:47.902 25.00000% : 8790.773us 00:10:47.902 50.00000% : 10527.871us 00:10:47.902 75.00000% : 14317.905us 00:10:47.902 90.00000% : 17686.824us 00:10:47.902 95.00000% : 18423.775us 00:10:47.902 98.00000% : 19266.005us 
00:10:47.902 99.00000% : 30741.385us 00:10:47.902 99.50000% : 40427.027us 00:10:47.902 99.90000% : 41900.929us 00:10:47.902 99.99000% : 42322.043us 00:10:47.902 99.99900% : 42322.043us 00:10:47.902 99.99990% : 42322.043us 00:10:47.902 99.99999% : 42322.043us 00:10:47.902 00:10:47.902 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:47.902 ================================================================================= 00:10:47.902 1.00000% : 8001.182us 00:10:47.902 10.00000% : 8317.018us 00:10:47.902 25.00000% : 8790.773us 00:10:47.902 50.00000% : 10317.314us 00:10:47.903 75.00000% : 14423.184us 00:10:47.903 90.00000% : 17476.267us 00:10:47.903 95.00000% : 18423.775us 00:10:47.903 98.00000% : 19371.284us 00:10:47.903 99.00000% : 28846.368us 00:10:47.903 99.50000% : 39163.682us 00:10:47.903 99.90000% : 40848.141us 00:10:47.903 99.99000% : 41058.699us 00:10:47.903 99.99900% : 41058.699us 00:10:47.903 99.99990% : 41058.699us 00:10:47.903 99.99999% : 41058.699us 00:10:47.903 00:10:47.903 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:47.903 ================================================================================= 00:10:47.903 1.00000% : 8001.182us 00:10:47.903 10.00000% : 8369.658us 00:10:47.903 25.00000% : 8790.773us 00:10:47.903 50.00000% : 10264.675us 00:10:47.903 75.00000% : 14423.184us 00:10:47.903 90.00000% : 17476.267us 00:10:47.903 95.00000% : 18423.775us 00:10:47.903 98.00000% : 19687.120us 00:10:47.903 99.00000% : 27583.023us 00:10:47.903 99.50000% : 37900.337us 00:10:47.903 99.90000% : 39584.797us 00:10:47.903 99.99000% : 39795.354us 00:10:47.903 99.99900% : 39795.354us 00:10:47.903 99.99990% : 39795.354us 00:10:47.903 99.99999% : 39795.354us 00:10:47.903 00:10:47.903 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:47.903 ============================================================================== 00:10:47.903 Range in us Cumulative IO count 00:10:47.903 7527.428 - 7580.067: 0.0093% ( 1) 00:10:47.903 7632.707 - 7685.346: 0.0372% ( 3) 00:10:47.903 7685.346 - 7737.986: 0.1116% ( 8) 00:10:47.903 7737.986 - 7790.625: 0.2232% ( 12) 00:10:47.903 7790.625 - 7843.264: 0.5208% ( 32) 00:10:47.903 7843.264 - 7895.904: 0.9208% ( 43) 00:10:47.903 7895.904 - 7948.543: 1.4695% ( 59) 00:10:47.903 7948.543 - 8001.182: 2.2507% ( 84) 00:10:47.903 8001.182 - 8053.822: 3.3854% ( 122) 00:10:47.903 8053.822 - 8106.461: 4.7712% ( 149) 00:10:47.903 8106.461 - 8159.100: 6.1198% ( 145) 00:10:47.903 8159.100 - 8211.740: 7.6823% ( 168) 00:10:47.903 8211.740 - 8264.379: 9.6540% ( 212) 00:10:47.903 8264.379 - 8317.018: 11.3002% ( 177) 00:10:47.903 8317.018 - 8369.658: 13.0487% ( 188) 00:10:47.903 8369.658 - 8422.297: 14.8251% ( 191) 00:10:47.903 8422.297 - 8474.937: 17.2340% ( 259) 00:10:47.903 8474.937 - 8527.576: 19.1406% ( 205) 00:10:47.903 8527.576 - 8580.215: 20.3125% ( 126) 00:10:47.903 8580.215 - 8632.855: 21.5495% ( 133) 00:10:47.903 8632.855 - 8685.494: 22.6190% ( 115) 00:10:47.903 8685.494 - 8738.133: 23.7909% ( 126) 00:10:47.903 8738.133 - 8790.773: 24.9814% ( 128) 00:10:47.903 8790.773 - 8843.412: 26.1068% ( 121) 00:10:47.903 8843.412 - 8896.051: 27.4461% ( 144) 00:10:47.903 8896.051 - 8948.691: 28.5156% ( 115) 00:10:47.903 8948.691 - 9001.330: 29.2504% ( 79) 00:10:47.903 9001.330 - 9053.969: 30.2362% ( 106) 00:10:47.903 9053.969 - 9106.609: 31.1570% ( 99) 00:10:47.903 9106.609 - 9159.248: 32.0219% ( 93) 00:10:47.903 9159.248 - 9211.888: 32.7939% ( 83) 00:10:47.903 9211.888 - 9264.527: 34.0309% ( 133) 00:10:47.903 9264.527 - 
9317.166: 35.2028% ( 126) 00:10:47.903 9317.166 - 9369.806: 36.1700% ( 104) 00:10:47.903 9369.806 - 9422.445: 37.2024% ( 111) 00:10:47.903 9422.445 - 9475.084: 38.0952% ( 96) 00:10:47.903 9475.084 - 9527.724: 39.2857% ( 128) 00:10:47.903 9527.724 - 9580.363: 40.3739% ( 117) 00:10:47.903 9580.363 - 9633.002: 41.3411% ( 104) 00:10:47.903 9633.002 - 9685.642: 42.4014% ( 114) 00:10:47.903 9685.642 - 9738.281: 43.5547% ( 124) 00:10:47.903 9738.281 - 9790.920: 44.5685% ( 109) 00:10:47.903 9790.920 - 9843.560: 45.7124% ( 123) 00:10:47.903 9843.560 - 9896.199: 46.3449% ( 68) 00:10:47.903 9896.199 - 9948.839: 46.9122% ( 61) 00:10:47.903 9948.839 - 10001.478: 47.5260% ( 66) 00:10:47.903 10001.478 - 10054.117: 48.1027% ( 62) 00:10:47.903 10054.117 - 10106.757: 48.5398% ( 47) 00:10:47.903 10106.757 - 10159.396: 49.0699% ( 57) 00:10:47.903 10159.396 - 10212.035: 49.6187% ( 59) 00:10:47.903 10212.035 - 10264.675: 50.0093% ( 42) 00:10:47.903 10264.675 - 10317.314: 50.3813% ( 40) 00:10:47.903 10317.314 - 10369.953: 50.8278% ( 48) 00:10:47.903 10369.953 - 10422.593: 51.1719% ( 37) 00:10:47.903 10422.593 - 10475.232: 51.3858% ( 23) 00:10:47.903 10475.232 - 10527.871: 51.8322% ( 48) 00:10:47.903 10527.871 - 10580.511: 52.5949% ( 82) 00:10:47.903 10580.511 - 10633.150: 53.1901% ( 64) 00:10:47.903 10633.150 - 10685.790: 53.6365% ( 48) 00:10:47.903 10685.790 - 10738.429: 54.2411% ( 65) 00:10:47.903 10738.429 - 10791.068: 54.8084% ( 61) 00:10:47.903 10791.068 - 10843.708: 55.4036% ( 64) 00:10:47.903 10843.708 - 10896.347: 55.9059% ( 54) 00:10:47.903 10896.347 - 10948.986: 56.3244% ( 45) 00:10:47.903 10948.986 - 11001.626: 56.6592% ( 36) 00:10:47.903 11001.626 - 11054.265: 57.1615% ( 54) 00:10:47.903 11054.265 - 11106.904: 57.4777% ( 34) 00:10:47.903 11106.904 - 11159.544: 57.8032% ( 35) 00:10:47.903 11159.544 - 11212.183: 58.0357% ( 25) 00:10:47.903 11212.183 - 11264.822: 58.2682% ( 25) 00:10:47.903 11264.822 - 11317.462: 58.6682% ( 43) 00:10:47.903 11317.462 - 11370.101: 59.0774% ( 44) 00:10:47.903 11370.101 - 11422.741: 59.4680% ( 42) 00:10:47.903 11422.741 - 11475.380: 59.7098% ( 26) 00:10:47.903 11475.380 - 11528.019: 59.9237% ( 23) 00:10:47.903 11528.019 - 11580.659: 60.3702% ( 48) 00:10:47.903 11580.659 - 11633.298: 60.5190% ( 16) 00:10:47.903 11633.298 - 11685.937: 60.7050% ( 20) 00:10:47.903 11685.937 - 11738.577: 60.9375% ( 25) 00:10:47.903 11738.577 - 11791.216: 61.1793% ( 26) 00:10:47.903 11791.216 - 11843.855: 61.4583% ( 30) 00:10:47.903 11843.855 - 11896.495: 61.8397% ( 41) 00:10:47.903 11896.495 - 11949.134: 62.1559% ( 34) 00:10:47.903 11949.134 - 12001.773: 62.4535% ( 32) 00:10:47.903 12001.773 - 12054.413: 62.6488% ( 21) 00:10:47.903 12054.413 - 12107.052: 63.0022% ( 38) 00:10:47.903 12107.052 - 12159.692: 63.3464% ( 37) 00:10:47.903 12159.692 - 12212.331: 63.8579% ( 55) 00:10:47.903 12212.331 - 12264.970: 64.3136% ( 49) 00:10:47.903 12264.970 - 12317.610: 64.6391% ( 35) 00:10:47.903 12317.610 - 12370.249: 64.8344% ( 21) 00:10:47.903 12370.249 - 12422.888: 65.2344% ( 43) 00:10:47.903 12422.888 - 12475.528: 65.5785% ( 37) 00:10:47.903 12475.528 - 12528.167: 65.9133% ( 36) 00:10:47.903 12528.167 - 12580.806: 66.1737% ( 28) 00:10:47.903 12580.806 - 12633.446: 66.3597% ( 20) 00:10:47.903 12633.446 - 12686.085: 66.5830% ( 24) 00:10:47.903 12686.085 - 12738.724: 66.8527% ( 29) 00:10:47.903 12738.724 - 12791.364: 67.1596% ( 33) 00:10:47.903 12791.364 - 12844.003: 67.4572% ( 32) 00:10:47.903 12844.003 - 12896.643: 67.7269% ( 29) 00:10:47.903 12896.643 - 12949.282: 68.0804% ( 38) 00:10:47.903 12949.282 - 
13001.921: 68.4152% ( 36) 00:10:47.903 13001.921 - 13054.561: 68.7221% ( 33) 00:10:47.903 13054.561 - 13107.200: 68.9825% ( 28) 00:10:47.903 13107.200 - 13159.839: 69.2522% ( 29) 00:10:47.903 13159.839 - 13212.479: 69.5499% ( 32) 00:10:47.903 13212.479 - 13265.118: 69.7917% ( 26) 00:10:47.903 13265.118 - 13317.757: 70.0056% ( 23) 00:10:47.903 13317.757 - 13370.397: 70.1079% ( 11) 00:10:47.903 13370.397 - 13423.036: 70.3032% ( 21) 00:10:47.903 13423.036 - 13475.676: 70.4985% ( 21) 00:10:47.903 13475.676 - 13580.954: 70.8240% ( 35) 00:10:47.903 13580.954 - 13686.233: 71.1496% ( 35) 00:10:47.903 13686.233 - 13791.512: 71.4565% ( 33) 00:10:47.903 13791.512 - 13896.790: 71.9308% ( 51) 00:10:47.903 13896.790 - 14002.069: 72.6097% ( 73) 00:10:47.903 14002.069 - 14107.348: 73.5305% ( 99) 00:10:47.904 14107.348 - 14212.627: 74.2467% ( 77) 00:10:47.904 14212.627 - 14317.905: 74.8047% ( 60) 00:10:47.904 14317.905 - 14423.184: 75.2697% ( 50) 00:10:47.904 14423.184 - 14528.463: 75.7347% ( 50) 00:10:47.904 14528.463 - 14633.741: 76.2835% ( 59) 00:10:47.904 14633.741 - 14739.020: 76.5532% ( 29) 00:10:47.904 14739.020 - 14844.299: 76.9810% ( 46) 00:10:47.904 14844.299 - 14949.578: 77.6135% ( 68) 00:10:47.904 14949.578 - 15054.856: 78.2273% ( 66) 00:10:47.904 15054.856 - 15160.135: 78.7667% ( 58) 00:10:47.904 15160.135 - 15265.414: 79.2783% ( 55) 00:10:47.904 15265.414 - 15370.692: 79.7991% ( 56) 00:10:47.904 15370.692 - 15475.971: 80.3478% ( 59) 00:10:47.904 15475.971 - 15581.250: 81.1198% ( 83) 00:10:47.904 15581.250 - 15686.529: 81.6034% ( 52) 00:10:47.904 15686.529 - 15791.807: 82.2824% ( 73) 00:10:47.904 15791.807 - 15897.086: 82.6823% ( 43) 00:10:47.904 15897.086 - 16002.365: 83.0915% ( 44) 00:10:47.904 16002.365 - 16107.643: 83.5844% ( 53) 00:10:47.904 16107.643 - 16212.922: 84.1425% ( 60) 00:10:47.904 16212.922 - 16318.201: 84.7377% ( 64) 00:10:47.904 16318.201 - 16423.480: 85.2028% ( 50) 00:10:47.904 16423.480 - 16528.758: 85.7143% ( 55) 00:10:47.904 16528.758 - 16634.037: 86.3374% ( 67) 00:10:47.904 16634.037 - 16739.316: 86.9234% ( 63) 00:10:47.904 16739.316 - 16844.594: 87.3419% ( 45) 00:10:47.904 16844.594 - 16949.873: 87.7883% ( 48) 00:10:47.904 16949.873 - 17055.152: 88.3092% ( 56) 00:10:47.904 17055.152 - 17160.431: 88.7556% ( 48) 00:10:47.904 17160.431 - 17265.709: 89.2671% ( 55) 00:10:47.904 17265.709 - 17370.988: 89.8065% ( 58) 00:10:47.904 17370.988 - 17476.267: 90.2623% ( 49) 00:10:47.904 17476.267 - 17581.545: 90.6343% ( 40) 00:10:47.904 17581.545 - 17686.824: 91.0156% ( 41) 00:10:47.904 17686.824 - 17792.103: 91.6388% ( 67) 00:10:47.904 17792.103 - 17897.382: 92.0294% ( 42) 00:10:47.904 17897.382 - 18002.660: 92.3363% ( 33) 00:10:47.904 18002.660 - 18107.939: 92.6246% ( 31) 00:10:47.904 18107.939 - 18213.218: 92.9036% ( 30) 00:10:47.904 18213.218 - 18318.496: 93.1641% ( 28) 00:10:47.904 18318.496 - 18423.775: 93.4152% ( 27) 00:10:47.904 18423.775 - 18529.054: 93.9825% ( 61) 00:10:47.904 18529.054 - 18634.333: 94.6150% ( 68) 00:10:47.904 18634.333 - 18739.611: 95.0056% ( 42) 00:10:47.904 18739.611 - 18844.890: 95.2567% ( 27) 00:10:47.904 18844.890 - 18950.169: 95.3776% ( 13) 00:10:47.904 18950.169 - 19055.447: 95.4892% ( 12) 00:10:47.904 19055.447 - 19160.726: 95.7124% ( 24) 00:10:47.904 19160.726 - 19266.005: 95.9635% ( 27) 00:10:47.904 19266.005 - 19371.284: 96.2054% ( 26) 00:10:47.904 19371.284 - 19476.562: 96.3821% ( 19) 00:10:47.904 19476.562 - 19581.841: 96.5774% ( 21) 00:10:47.904 19581.841 - 19687.120: 96.8099% ( 25) 00:10:47.904 19687.120 - 19792.398: 96.9494% ( 15) 
00:10:47.904 19792.398 - 19897.677: 97.1819% ( 25) 00:10:47.904 19897.677 - 20002.956: 97.4609% ( 30) 00:10:47.904 20002.956 - 20108.235: 97.7400% ( 30) 00:10:47.904 20108.235 - 20213.513: 98.0190% ( 30) 00:10:47.904 20213.513 - 20318.792: 98.1678% ( 16) 00:10:47.904 20318.792 - 20424.071: 98.2794% ( 12) 00:10:47.904 20424.071 - 20529.349: 98.4096% ( 14) 00:10:47.904 20529.349 - 20634.628: 98.5584% ( 16) 00:10:47.904 20634.628 - 20739.907: 98.6235% ( 7) 00:10:47.904 20739.907 - 20845.186: 98.6514% ( 3) 00:10:47.904 20845.186 - 20950.464: 98.6979% ( 5) 00:10:47.904 20950.464 - 21055.743: 98.7723% ( 8) 00:10:47.904 21055.743 - 21161.022: 98.8095% ( 4) 00:10:47.904 31794.172 - 32004.729: 98.8188% ( 1) 00:10:47.904 32004.729 - 32215.287: 98.9025% ( 9) 00:10:47.904 32215.287 - 32425.844: 99.0327% ( 14) 00:10:47.904 32425.844 - 32636.402: 99.1071% ( 8) 00:10:47.904 32636.402 - 32846.959: 99.1443% ( 4) 00:10:47.904 32846.959 - 33057.516: 99.1908% ( 5) 00:10:47.904 33057.516 - 33268.074: 99.2281% ( 4) 00:10:47.904 33268.074 - 33478.631: 99.2839% ( 6) 00:10:47.904 33478.631 - 33689.189: 99.3211% ( 4) 00:10:47.904 33689.189 - 33899.746: 99.3862% ( 7) 00:10:47.904 33899.746 - 34110.304: 99.4048% ( 2) 00:10:47.904 42953.716 - 43164.273: 99.4513% ( 5) 00:10:47.904 43164.273 - 43374.831: 99.4978% ( 5) 00:10:47.904 43374.831 - 43585.388: 99.5443% ( 5) 00:10:47.904 43585.388 - 43795.945: 99.5815% ( 4) 00:10:47.904 43795.945 - 44006.503: 99.6466% ( 7) 00:10:47.904 44006.503 - 44217.060: 99.6838% ( 4) 00:10:47.904 44217.060 - 44427.618: 99.7303% ( 5) 00:10:47.904 44427.618 - 44638.175: 99.7861% ( 6) 00:10:47.904 44638.175 - 44848.733: 99.8326% ( 5) 00:10:47.904 44848.733 - 45059.290: 99.8884% ( 6) 00:10:47.904 45059.290 - 45269.847: 99.9349% ( 5) 00:10:47.904 45269.847 - 45480.405: 99.9814% ( 5) 00:10:47.904 45480.405 - 45690.962: 100.0000% ( 2) 00:10:47.904 00:10:47.904 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:47.904 ============================================================================== 00:10:47.904 Range in us Cumulative IO count 00:10:47.904 7580.067 - 7632.707: 0.0093% ( 1) 00:10:47.904 7632.707 - 7685.346: 0.0465% ( 4) 00:10:47.904 7685.346 - 7737.986: 0.0837% ( 4) 00:10:47.904 7737.986 - 7790.625: 0.1767% ( 10) 00:10:47.904 7790.625 - 7843.264: 0.5580% ( 41) 00:10:47.904 7843.264 - 7895.904: 0.7812% ( 24) 00:10:47.904 7895.904 - 7948.543: 1.1998% ( 45) 00:10:47.904 7948.543 - 8001.182: 1.8508% ( 70) 00:10:47.904 8001.182 - 8053.822: 2.5205% ( 72) 00:10:47.904 8053.822 - 8106.461: 3.9156% ( 150) 00:10:47.904 8106.461 - 8159.100: 5.3292% ( 152) 00:10:47.904 8159.100 - 8211.740: 6.5848% ( 135) 00:10:47.904 8211.740 - 8264.379: 8.3612% ( 191) 00:10:47.904 8264.379 - 8317.018: 10.1097% ( 188) 00:10:47.904 8317.018 - 8369.658: 11.9420% ( 197) 00:10:47.904 8369.658 - 8422.297: 13.8393% ( 204) 00:10:47.904 8422.297 - 8474.937: 16.0993% ( 243) 00:10:47.904 8474.937 - 8527.576: 17.8757% ( 191) 00:10:47.904 8527.576 - 8580.215: 19.8010% ( 207) 00:10:47.904 8580.215 - 8632.855: 21.3542% ( 167) 00:10:47.904 8632.855 - 8685.494: 22.8702% ( 163) 00:10:47.904 8685.494 - 8738.133: 23.8839% ( 109) 00:10:47.904 8738.133 - 8790.773: 24.6466% ( 82) 00:10:47.904 8790.773 - 8843.412: 25.3627% ( 77) 00:10:47.904 8843.412 - 8896.051: 26.2091% ( 91) 00:10:47.904 8896.051 - 8948.691: 27.2786% ( 115) 00:10:47.904 8948.691 - 9001.330: 28.8039% ( 164) 00:10:47.904 9001.330 - 9053.969: 29.8921% ( 117) 00:10:47.904 9053.969 - 9106.609: 30.8966% ( 108) 00:10:47.904 9106.609 - 9159.248: 31.9103% ( 
109) 00:10:47.904 9159.248 - 9211.888: 32.8218% ( 98) 00:10:47.904 9211.888 - 9264.527: 33.6031% ( 84) 00:10:47.904 9264.527 - 9317.166: 34.4587% ( 92) 00:10:47.904 9317.166 - 9369.806: 35.5190% ( 114) 00:10:47.904 9369.806 - 9422.445: 36.5885% ( 115) 00:10:47.904 9422.445 - 9475.084: 37.7697% ( 127) 00:10:47.904 9475.084 - 9527.724: 39.3508% ( 170) 00:10:47.904 9527.724 - 9580.363: 40.6064% ( 135) 00:10:47.904 9580.363 - 9633.002: 41.9550% ( 145) 00:10:47.904 9633.002 - 9685.642: 43.1641% ( 130) 00:10:47.904 9685.642 - 9738.281: 44.4847% ( 142) 00:10:47.904 9738.281 - 9790.920: 45.5171% ( 111) 00:10:47.904 9790.920 - 9843.560: 46.3263% ( 87) 00:10:47.904 9843.560 - 9896.199: 47.0052% ( 73) 00:10:47.904 9896.199 - 9948.839: 47.5446% ( 58) 00:10:47.904 9948.839 - 10001.478: 48.0190% ( 51) 00:10:47.904 10001.478 - 10054.117: 48.5026% ( 52) 00:10:47.904 10054.117 - 10106.757: 48.9676% ( 50) 00:10:47.904 10106.757 - 10159.396: 49.2932% ( 35) 00:10:47.904 10159.396 - 10212.035: 49.6745% ( 41) 00:10:47.904 10212.035 - 10264.675: 50.0465% ( 40) 00:10:47.904 10264.675 - 10317.314: 50.4557% ( 44) 00:10:47.904 10317.314 - 10369.953: 50.9673% ( 55) 00:10:47.904 10369.953 - 10422.593: 51.5532% ( 63) 00:10:47.904 10422.593 - 10475.232: 51.9996% ( 48) 00:10:47.904 10475.232 - 10527.871: 52.4182% ( 45) 00:10:47.904 10527.871 - 10580.511: 52.8832% ( 50) 00:10:47.904 10580.511 - 10633.150: 53.3389% ( 49) 00:10:47.904 10633.150 - 10685.790: 53.8597% ( 56) 00:10:47.904 10685.790 - 10738.429: 54.5201% ( 71) 00:10:47.904 10738.429 - 10791.068: 55.0688% ( 59) 00:10:47.904 10791.068 - 10843.708: 55.5246% ( 49) 00:10:47.904 10843.708 - 10896.347: 55.9524% ( 46) 00:10:47.904 10896.347 - 10948.986: 56.3337% ( 41) 00:10:47.904 10948.986 - 11001.626: 56.7894% ( 49) 00:10:47.904 11001.626 - 11054.265: 57.2638% ( 51) 00:10:47.904 11054.265 - 11106.904: 57.7009% ( 47) 00:10:47.904 11106.904 - 11159.544: 58.1101% ( 44) 00:10:47.904 11159.544 - 11212.183: 58.5100% ( 43) 00:10:47.904 11212.183 - 11264.822: 58.8263% ( 34) 00:10:47.904 11264.822 - 11317.462: 59.0867% ( 28) 00:10:47.904 11317.462 - 11370.101: 59.2541% ( 18) 00:10:47.904 11370.101 - 11422.741: 59.4494% ( 21) 00:10:47.904 11422.741 - 11475.380: 59.6447% ( 21) 00:10:47.904 11475.380 - 11528.019: 59.8958% ( 27) 00:10:47.904 11528.019 - 11580.659: 60.1656% ( 29) 00:10:47.904 11580.659 - 11633.298: 60.3888% ( 24) 00:10:47.904 11633.298 - 11685.937: 60.5376% ( 16) 00:10:47.904 11685.937 - 11738.577: 60.6771% ( 15) 00:10:47.904 11738.577 - 11791.216: 60.8631% ( 20) 00:10:47.904 11791.216 - 11843.855: 61.0305% ( 18) 00:10:47.905 11843.855 - 11896.495: 61.1142% ( 9) 00:10:47.905 11896.495 - 11949.134: 61.2630% ( 16) 00:10:47.905 11949.134 - 12001.773: 61.4955% ( 25) 00:10:47.905 12001.773 - 12054.413: 61.8304% ( 36) 00:10:47.905 12054.413 - 12107.052: 62.3233% ( 53) 00:10:47.905 12107.052 - 12159.692: 63.1045% ( 84) 00:10:47.905 12159.692 - 12212.331: 63.6347% ( 57) 00:10:47.905 12212.331 - 12264.970: 64.3322% ( 75) 00:10:47.905 12264.970 - 12317.610: 64.8438% ( 55) 00:10:47.905 12317.610 - 12370.249: 65.3181% ( 51) 00:10:47.905 12370.249 - 12422.888: 65.6529% ( 36) 00:10:47.905 12422.888 - 12475.528: 66.0528% ( 43) 00:10:47.905 12475.528 - 12528.167: 66.2667% ( 23) 00:10:47.905 12528.167 - 12580.806: 66.4435% ( 19) 00:10:47.905 12580.806 - 12633.446: 66.6667% ( 24) 00:10:47.905 12633.446 - 12686.085: 66.9085% ( 26) 00:10:47.905 12686.085 - 12738.724: 67.2340% ( 35) 00:10:47.905 12738.724 - 12791.364: 67.5223% ( 31) 00:10:47.905 12791.364 - 12844.003: 67.8664% ( 37) 
00:10:47.905 12844.003 - 12896.643: 68.1827% ( 34) 00:10:47.905 12896.643 - 12949.282: 68.4989% ( 34) 00:10:47.905 12949.282 - 13001.921: 68.8151% ( 34) 00:10:47.905 13001.921 - 13054.561: 69.0383% ( 24) 00:10:47.905 13054.561 - 13107.200: 69.3266% ( 31) 00:10:47.905 13107.200 - 13159.839: 69.6429% ( 34) 00:10:47.905 13159.839 - 13212.479: 69.8568% ( 23) 00:10:47.905 13212.479 - 13265.118: 70.0335% ( 19) 00:10:47.905 13265.118 - 13317.757: 70.2102% ( 19) 00:10:47.905 13317.757 - 13370.397: 70.3869% ( 19) 00:10:47.905 13370.397 - 13423.036: 70.6101% ( 24) 00:10:47.905 13423.036 - 13475.676: 70.9170% ( 33) 00:10:47.905 13475.676 - 13580.954: 71.3170% ( 43) 00:10:47.905 13580.954 - 13686.233: 71.8006% ( 52) 00:10:47.905 13686.233 - 13791.512: 72.4237% ( 67) 00:10:47.905 13791.512 - 13896.790: 72.8795% ( 49) 00:10:47.905 13896.790 - 14002.069: 73.5770% ( 75) 00:10:47.905 14002.069 - 14107.348: 74.3490% ( 83) 00:10:47.905 14107.348 - 14212.627: 74.8047% ( 49) 00:10:47.905 14212.627 - 14317.905: 75.1674% ( 39) 00:10:47.905 14317.905 - 14423.184: 75.6231% ( 49) 00:10:47.905 14423.184 - 14528.463: 75.9859% ( 39) 00:10:47.905 14528.463 - 14633.741: 76.3858% ( 43) 00:10:47.905 14633.741 - 14739.020: 76.7020% ( 34) 00:10:47.905 14739.020 - 14844.299: 77.0461% ( 37) 00:10:47.905 14844.299 - 14949.578: 77.6972% ( 70) 00:10:47.905 14949.578 - 15054.856: 78.1994% ( 54) 00:10:47.905 15054.856 - 15160.135: 78.8039% ( 65) 00:10:47.905 15160.135 - 15265.414: 79.3713% ( 61) 00:10:47.905 15265.414 - 15370.692: 80.0595% ( 74) 00:10:47.905 15370.692 - 15475.971: 80.5339% ( 51) 00:10:47.905 15475.971 - 15581.250: 80.9617% ( 46) 00:10:47.905 15581.250 - 15686.529: 81.4360% ( 51) 00:10:47.905 15686.529 - 15791.807: 82.0406% ( 65) 00:10:47.905 15791.807 - 15897.086: 82.5521% ( 55) 00:10:47.905 15897.086 - 16002.365: 83.2310% ( 73) 00:10:47.905 16002.365 - 16107.643: 83.9472% ( 77) 00:10:47.905 16107.643 - 16212.922: 84.8772% ( 100) 00:10:47.905 16212.922 - 16318.201: 85.4167% ( 58) 00:10:47.905 16318.201 - 16423.480: 86.0119% ( 64) 00:10:47.905 16423.480 - 16528.758: 86.3653% ( 38) 00:10:47.905 16528.758 - 16634.037: 86.7094% ( 37) 00:10:47.905 16634.037 - 16739.316: 87.2117% ( 54) 00:10:47.905 16739.316 - 16844.594: 87.6395% ( 46) 00:10:47.905 16844.594 - 16949.873: 88.1231% ( 52) 00:10:47.905 16949.873 - 17055.152: 88.5975% ( 51) 00:10:47.905 17055.152 - 17160.431: 89.0904% ( 53) 00:10:47.905 17160.431 - 17265.709: 89.4345% ( 37) 00:10:47.905 17265.709 - 17370.988: 89.7042% ( 29) 00:10:47.905 17370.988 - 17476.267: 90.1879% ( 52) 00:10:47.905 17476.267 - 17581.545: 90.4576% ( 29) 00:10:47.905 17581.545 - 17686.824: 90.7924% ( 36) 00:10:47.905 17686.824 - 17792.103: 91.1365% ( 37) 00:10:47.905 17792.103 - 17897.382: 91.4249% ( 31) 00:10:47.905 17897.382 - 18002.660: 91.8155% ( 42) 00:10:47.905 18002.660 - 18107.939: 92.2433% ( 46) 00:10:47.905 18107.939 - 18213.218: 92.5595% ( 34) 00:10:47.905 18213.218 - 18318.496: 92.9315% ( 40) 00:10:47.905 18318.496 - 18423.775: 93.4338% ( 54) 00:10:47.905 18423.775 - 18529.054: 94.0104% ( 62) 00:10:47.905 18529.054 - 18634.333: 94.4382% ( 46) 00:10:47.905 18634.333 - 18739.611: 94.8010% ( 39) 00:10:47.905 18739.611 - 18844.890: 95.2846% ( 52) 00:10:47.905 18844.890 - 18950.169: 95.7310% ( 48) 00:10:47.905 18950.169 - 19055.447: 96.1124% ( 41) 00:10:47.905 19055.447 - 19160.726: 96.4286% ( 34) 00:10:47.905 19160.726 - 19266.005: 96.6983% ( 29) 00:10:47.905 19266.005 - 19371.284: 96.9215% ( 24) 00:10:47.905 19371.284 - 19476.562: 97.1447% ( 24) 00:10:47.905 19476.562 - 
19581.841: 97.2935% ( 16) 00:10:47.905 19581.841 - 19687.120: 97.4051% ( 12) 00:10:47.905 19687.120 - 19792.398: 97.4888% ( 9) 00:10:47.905 19792.398 - 19897.677: 97.5725% ( 9) 00:10:47.905 19897.677 - 20002.956: 97.6004% ( 3) 00:10:47.905 20002.956 - 20108.235: 97.7865% ( 20) 00:10:47.905 20108.235 - 20213.513: 97.8609% ( 8) 00:10:47.905 20213.513 - 20318.792: 98.0097% ( 16) 00:10:47.905 20318.792 - 20424.071: 98.1213% ( 12) 00:10:47.905 20424.071 - 20529.349: 98.1957% ( 8) 00:10:47.905 20529.349 - 20634.628: 98.2608% ( 7) 00:10:47.905 20634.628 - 20739.907: 98.3259% ( 7) 00:10:47.905 20739.907 - 20845.186: 98.3910% ( 7) 00:10:47.905 20845.186 - 20950.464: 98.4747% ( 9) 00:10:47.905 20950.464 - 21055.743: 98.5584% ( 9) 00:10:47.905 21055.743 - 21161.022: 98.6421% ( 9) 00:10:47.905 21161.022 - 21266.300: 98.7258% ( 9) 00:10:47.905 21266.300 - 21371.579: 98.7723% ( 5) 00:10:47.905 21371.579 - 21476.858: 98.8095% ( 4) 00:10:47.905 31794.172 - 32004.729: 98.8281% ( 2) 00:10:47.905 32004.729 - 32215.287: 98.8839% ( 6) 00:10:47.905 32215.287 - 32425.844: 98.9397% ( 6) 00:10:47.905 32425.844 - 32636.402: 98.9955% ( 6) 00:10:47.905 32636.402 - 32846.959: 99.0606% ( 7) 00:10:47.905 32846.959 - 33057.516: 99.1164% ( 6) 00:10:47.905 33057.516 - 33268.074: 99.1815% ( 7) 00:10:47.905 33268.074 - 33478.631: 99.2374% ( 6) 00:10:47.905 33478.631 - 33689.189: 99.3025% ( 7) 00:10:47.905 33689.189 - 33899.746: 99.3583% ( 6) 00:10:47.905 33899.746 - 34110.304: 99.4048% ( 5) 00:10:47.905 41269.256 - 41479.814: 99.4420% ( 4) 00:10:47.905 41479.814 - 41690.371: 99.4978% ( 6) 00:10:47.905 41690.371 - 41900.929: 99.5536% ( 6) 00:10:47.905 41900.929 - 42111.486: 99.6094% ( 6) 00:10:47.905 42111.486 - 42322.043: 99.6652% ( 6) 00:10:47.905 42322.043 - 42532.601: 99.7117% ( 5) 00:10:47.905 42532.601 - 42743.158: 99.7768% ( 7) 00:10:47.905 42743.158 - 42953.716: 99.8233% ( 5) 00:10:47.905 42953.716 - 43164.273: 99.8791% ( 6) 00:10:47.905 43164.273 - 43374.831: 99.9349% ( 6) 00:10:47.905 43374.831 - 43585.388: 99.9907% ( 6) 00:10:47.905 43585.388 - 43795.945: 100.0000% ( 1) 00:10:47.905 00:10:47.905 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:47.905 ============================================================================== 00:10:47.905 Range in us Cumulative IO count 00:10:47.905 7632.707 - 7685.346: 0.0186% ( 2) 00:10:47.905 7685.346 - 7737.986: 0.0372% ( 2) 00:10:47.905 7737.986 - 7790.625: 0.1488% ( 12) 00:10:47.905 7790.625 - 7843.264: 0.4464% ( 32) 00:10:47.905 7843.264 - 7895.904: 0.8836% ( 47) 00:10:47.905 7895.904 - 7948.543: 1.6183% ( 79) 00:10:47.905 7948.543 - 8001.182: 2.4182% ( 86) 00:10:47.905 8001.182 - 8053.822: 3.2645% ( 91) 00:10:47.905 8053.822 - 8106.461: 4.2132% ( 102) 00:10:47.905 8106.461 - 8159.100: 5.3013% ( 117) 00:10:47.905 8159.100 - 8211.740: 6.8731% ( 169) 00:10:47.905 8211.740 - 8264.379: 8.6961% ( 196) 00:10:47.906 8264.379 - 8317.018: 10.6771% ( 213) 00:10:47.906 8317.018 - 8369.658: 12.7139% ( 219) 00:10:47.906 8369.658 - 8422.297: 15.0670% ( 253) 00:10:47.906 8422.297 - 8474.937: 17.3177% ( 242) 00:10:47.906 8474.937 - 8527.576: 19.3080% ( 214) 00:10:47.906 8527.576 - 8580.215: 21.0844% ( 191) 00:10:47.906 8580.215 - 8632.855: 22.4237% ( 144) 00:10:47.906 8632.855 - 8685.494: 23.3631% ( 101) 00:10:47.906 8685.494 - 8738.133: 24.0327% ( 72) 00:10:47.906 8738.133 - 8790.773: 25.1209% ( 117) 00:10:47.906 8790.773 - 8843.412: 25.9115% ( 85) 00:10:47.906 8843.412 - 8896.051: 26.9066% ( 107) 00:10:47.906 8896.051 - 8948.691: 27.9948% ( 117) 00:10:47.906 8948.691 
- 9001.330: 29.1295% ( 122) 00:10:47.906 9001.330 - 9053.969: 30.7664% ( 176) 00:10:47.906 9053.969 - 9106.609: 31.6034% ( 90) 00:10:47.906 9106.609 - 9159.248: 32.3568% ( 81) 00:10:47.906 9159.248 - 9211.888: 33.1194% ( 82) 00:10:47.906 9211.888 - 9264.527: 33.9844% ( 93) 00:10:47.906 9264.527 - 9317.166: 34.8679% ( 95) 00:10:47.906 9317.166 - 9369.806: 35.8631% ( 107) 00:10:47.906 9369.806 - 9422.445: 37.0071% ( 123) 00:10:47.906 9422.445 - 9475.084: 37.9650% ( 103) 00:10:47.906 9475.084 - 9527.724: 39.4345% ( 158) 00:10:47.906 9527.724 - 9580.363: 40.4297% ( 107) 00:10:47.906 9580.363 - 9633.002: 41.3225% ( 96) 00:10:47.906 9633.002 - 9685.642: 42.6711% ( 145) 00:10:47.906 9685.642 - 9738.281: 43.5547% ( 95) 00:10:47.906 9738.281 - 9790.920: 44.3080% ( 81) 00:10:47.906 9790.920 - 9843.560: 45.2288% ( 99) 00:10:47.906 9843.560 - 9896.199: 45.8147% ( 63) 00:10:47.906 9896.199 - 9948.839: 46.6239% ( 87) 00:10:47.906 9948.839 - 10001.478: 47.0238% ( 43) 00:10:47.906 10001.478 - 10054.117: 47.3493% ( 35) 00:10:47.906 10054.117 - 10106.757: 47.7493% ( 43) 00:10:47.906 10106.757 - 10159.396: 48.1027% ( 38) 00:10:47.906 10159.396 - 10212.035: 48.4747% ( 40) 00:10:47.906 10212.035 - 10264.675: 49.1071% ( 68) 00:10:47.906 10264.675 - 10317.314: 49.5350% ( 46) 00:10:47.906 10317.314 - 10369.953: 49.9349% ( 43) 00:10:47.906 10369.953 - 10422.593: 50.3534% ( 45) 00:10:47.906 10422.593 - 10475.232: 50.7254% ( 40) 00:10:47.906 10475.232 - 10527.871: 51.1533% ( 46) 00:10:47.906 10527.871 - 10580.511: 51.5439% ( 42) 00:10:47.906 10580.511 - 10633.150: 51.8601% ( 34) 00:10:47.906 10633.150 - 10685.790: 52.2135% ( 38) 00:10:47.906 10685.790 - 10738.429: 53.0692% ( 92) 00:10:47.906 10738.429 - 10791.068: 53.5714% ( 54) 00:10:47.906 10791.068 - 10843.708: 54.2690% ( 75) 00:10:47.906 10843.708 - 10896.347: 54.7805% ( 55) 00:10:47.906 10896.347 - 10948.986: 55.1432% ( 39) 00:10:47.906 10948.986 - 11001.626: 55.4408% ( 32) 00:10:47.906 11001.626 - 11054.265: 55.8129% ( 40) 00:10:47.906 11054.265 - 11106.904: 56.3058% ( 53) 00:10:47.906 11106.904 - 11159.544: 56.7336% ( 46) 00:10:47.906 11159.544 - 11212.183: 56.9940% ( 28) 00:10:47.906 11212.183 - 11264.822: 57.2266% ( 25) 00:10:47.906 11264.822 - 11317.462: 57.4405% ( 23) 00:10:47.906 11317.462 - 11370.101: 57.7288% ( 31) 00:10:47.906 11370.101 - 11422.741: 57.9334% ( 22) 00:10:47.906 11422.741 - 11475.380: 58.1473% ( 23) 00:10:47.906 11475.380 - 11528.019: 58.4635% ( 34) 00:10:47.906 11528.019 - 11580.659: 58.8914% ( 46) 00:10:47.906 11580.659 - 11633.298: 59.5052% ( 66) 00:10:47.906 11633.298 - 11685.937: 60.1562% ( 70) 00:10:47.906 11685.937 - 11738.577: 60.6864% ( 57) 00:10:47.906 11738.577 - 11791.216: 61.2072% ( 56) 00:10:47.906 11791.216 - 11843.855: 61.7094% ( 54) 00:10:47.906 11843.855 - 11896.495: 62.1280% ( 45) 00:10:47.906 11896.495 - 11949.134: 62.5372% ( 44) 00:10:47.906 11949.134 - 12001.773: 63.0487% ( 55) 00:10:47.906 12001.773 - 12054.413: 63.5975% ( 59) 00:10:47.906 12054.413 - 12107.052: 63.9788% ( 41) 00:10:47.906 12107.052 - 12159.692: 64.2671% ( 31) 00:10:47.906 12159.692 - 12212.331: 64.5182% ( 27) 00:10:47.906 12212.331 - 12264.970: 64.9833% ( 50) 00:10:47.906 12264.970 - 12317.610: 65.3274% ( 37) 00:10:47.906 12317.610 - 12370.249: 65.4948% ( 18) 00:10:47.906 12370.249 - 12422.888: 65.6064% ( 12) 00:10:47.906 12422.888 - 12475.528: 65.7273% ( 13) 00:10:47.906 12475.528 - 12528.167: 65.8482% ( 13) 00:10:47.906 12528.167 - 12580.806: 66.0249% ( 19) 00:10:47.906 12580.806 - 12633.446: 66.2016% ( 19) 00:10:47.906 12633.446 - 12686.085: 
66.3876% ( 20) 00:10:47.906 12686.085 - 12738.724: 66.6481% ( 28) 00:10:47.906 12738.724 - 12791.364: 67.0573% ( 44) 00:10:47.906 12791.364 - 12844.003: 67.3270% ( 29) 00:10:47.906 12844.003 - 12896.643: 67.5781% ( 27) 00:10:47.906 12896.643 - 12949.282: 67.9036% ( 35) 00:10:47.906 12949.282 - 13001.921: 68.1269% ( 24) 00:10:47.906 13001.921 - 13054.561: 68.4152% ( 31) 00:10:47.906 13054.561 - 13107.200: 68.6477% ( 25) 00:10:47.906 13107.200 - 13159.839: 68.9825% ( 36) 00:10:47.906 13159.839 - 13212.479: 69.3173% ( 36) 00:10:47.906 13212.479 - 13265.118: 69.7173% ( 43) 00:10:47.906 13265.118 - 13317.757: 70.0521% ( 36) 00:10:47.906 13317.757 - 13370.397: 70.4148% ( 39) 00:10:47.906 13370.397 - 13423.036: 70.9542% ( 58) 00:10:47.906 13423.036 - 13475.676: 71.3728% ( 45) 00:10:47.906 13475.676 - 13580.954: 72.1633% ( 85) 00:10:47.906 13580.954 - 13686.233: 72.7307% ( 61) 00:10:47.906 13686.233 - 13791.512: 73.3631% ( 68) 00:10:47.906 13791.512 - 13896.790: 74.0885% ( 78) 00:10:47.906 13896.790 - 14002.069: 75.2046% ( 120) 00:10:47.906 14002.069 - 14107.348: 75.6603% ( 49) 00:10:47.906 14107.348 - 14212.627: 76.0138% ( 38) 00:10:47.906 14212.627 - 14317.905: 76.3579% ( 37) 00:10:47.906 14317.905 - 14423.184: 76.6462% ( 31) 00:10:47.906 14423.184 - 14528.463: 76.8415% ( 21) 00:10:47.906 14528.463 - 14633.741: 76.9996% ( 17) 00:10:47.906 14633.741 - 14739.020: 77.2972% ( 32) 00:10:47.906 14739.020 - 14844.299: 77.5577% ( 28) 00:10:47.906 14844.299 - 14949.578: 77.9204% ( 39) 00:10:47.906 14949.578 - 15054.856: 78.3296% ( 44) 00:10:47.906 15054.856 - 15160.135: 78.7202% ( 42) 00:10:47.906 15160.135 - 15265.414: 79.0923% ( 40) 00:10:47.906 15265.414 - 15370.692: 79.5108% ( 45) 00:10:47.906 15370.692 - 15475.971: 80.2827% ( 83) 00:10:47.906 15475.971 - 15581.250: 81.0082% ( 78) 00:10:47.906 15581.250 - 15686.529: 81.7708% ( 82) 00:10:47.906 15686.529 - 15791.807: 82.6544% ( 95) 00:10:47.906 15791.807 - 15897.086: 83.3147% ( 71) 00:10:47.906 15897.086 - 16002.365: 83.8170% ( 54) 00:10:47.906 16002.365 - 16107.643: 84.3471% ( 57) 00:10:47.906 16107.643 - 16212.922: 84.5610% ( 23) 00:10:47.906 16212.922 - 16318.201: 84.7377% ( 19) 00:10:47.906 16318.201 - 16423.480: 84.9423% ( 22) 00:10:47.906 16423.480 - 16528.758: 85.2307% ( 31) 00:10:47.906 16528.758 - 16634.037: 85.6306% ( 43) 00:10:47.906 16634.037 - 16739.316: 86.1514% ( 56) 00:10:47.906 16739.316 - 16844.594: 86.5792% ( 46) 00:10:47.906 16844.594 - 16949.873: 87.0257% ( 48) 00:10:47.906 16949.873 - 17055.152: 87.3884% ( 39) 00:10:47.906 17055.152 - 17160.431: 87.7604% ( 40) 00:10:47.906 17160.431 - 17265.709: 88.2626% ( 54) 00:10:47.906 17265.709 - 17370.988: 88.9323% ( 72) 00:10:47.906 17370.988 - 17476.267: 89.5089% ( 62) 00:10:47.906 17476.267 - 17581.545: 90.1786% ( 72) 00:10:47.906 17581.545 - 17686.824: 90.7366% ( 60) 00:10:47.906 17686.824 - 17792.103: 91.3876% ( 70) 00:10:47.906 17792.103 - 17897.382: 91.8899% ( 54) 00:10:47.906 17897.382 - 18002.660: 92.4851% ( 64) 00:10:47.906 18002.660 - 18107.939: 92.9874% ( 54) 00:10:47.906 18107.939 - 18213.218: 93.7593% ( 83) 00:10:47.906 18213.218 - 18318.496: 94.2987% ( 58) 00:10:47.906 18318.496 - 18423.775: 94.7452% ( 48) 00:10:47.906 18423.775 - 18529.054: 94.9684% ( 24) 00:10:47.906 18529.054 - 18634.333: 95.2474% ( 30) 00:10:47.906 18634.333 - 18739.611: 95.6008% ( 38) 00:10:47.906 18739.611 - 18844.890: 95.9728% ( 40) 00:10:47.906 18844.890 - 18950.169: 96.4007% ( 46) 00:10:47.906 18950.169 - 19055.447: 97.1075% ( 76) 00:10:47.906 19055.447 - 19160.726: 97.4423% ( 36) 00:10:47.906 
19160.726 - 19266.005: 97.7214% ( 30) 00:10:47.906 19266.005 - 19371.284: 97.9632% ( 26) 00:10:47.906 19371.284 - 19476.562: 98.1678% ( 22) 00:10:47.906 19476.562 - 19581.841: 98.2887% ( 13) 00:10:47.906 19581.841 - 19687.120: 98.4003% ( 12) 00:10:47.906 19687.120 - 19792.398: 98.5305% ( 14) 00:10:47.906 19792.398 - 19897.677: 98.6049% ( 8) 00:10:47.906 19897.677 - 20002.956: 98.6328% ( 3) 00:10:47.906 20002.956 - 20108.235: 98.6607% ( 3) 00:10:47.906 20108.235 - 20213.513: 98.6886% ( 3) 00:10:47.906 20213.513 - 20318.792: 98.7165% ( 3) 00:10:47.906 20318.792 - 20424.071: 98.7444% ( 3) 00:10:47.906 20424.071 - 20529.349: 98.7909% ( 5) 00:10:47.906 20529.349 - 20634.628: 98.8095% ( 2) 00:10:47.906 31373.057 - 31583.614: 98.8467% ( 4) 00:10:47.906 31583.614 - 31794.172: 98.9118% ( 7) 00:10:47.906 31794.172 - 32004.729: 98.9769% ( 7) 00:10:47.906 32004.729 - 32215.287: 99.0327% ( 6) 00:10:47.906 32215.287 - 32425.844: 99.0978% ( 7) 00:10:47.906 32425.844 - 32636.402: 99.1536% ( 6) 00:10:47.906 32636.402 - 32846.959: 99.2188% ( 7) 00:10:47.907 32846.959 - 33057.516: 99.2839% ( 7) 00:10:47.907 33057.516 - 33268.074: 99.3397% ( 6) 00:10:47.907 33268.074 - 33478.631: 99.4048% ( 7) 00:10:47.907 40848.141 - 41058.699: 99.4327% ( 3) 00:10:47.907 41058.699 - 41269.256: 99.4885% ( 6) 00:10:47.907 41269.256 - 41479.814: 99.5443% ( 6) 00:10:47.907 41479.814 - 41690.371: 99.6001% ( 6) 00:10:47.907 41690.371 - 41900.929: 99.6466% ( 5) 00:10:47.907 41900.929 - 42111.486: 99.7024% ( 6) 00:10:47.907 42111.486 - 42322.043: 99.7675% ( 7) 00:10:47.907 42322.043 - 42532.601: 99.8140% ( 5) 00:10:47.907 42532.601 - 42743.158: 99.8698% ( 6) 00:10:47.907 42743.158 - 42953.716: 99.9256% ( 6) 00:10:47.907 42953.716 - 43164.273: 99.9814% ( 6) 00:10:47.907 43164.273 - 43374.831: 100.0000% ( 2) 00:10:47.907 00:10:47.907 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:47.907 ============================================================================== 00:10:47.907 Range in us Cumulative IO count 00:10:47.907 7580.067 - 7632.707: 0.0093% ( 1) 00:10:47.907 7632.707 - 7685.346: 0.0186% ( 1) 00:10:47.907 7685.346 - 7737.986: 0.0558% ( 4) 00:10:47.907 7737.986 - 7790.625: 0.1860% ( 14) 00:10:47.907 7790.625 - 7843.264: 0.5208% ( 36) 00:10:47.907 7843.264 - 7895.904: 0.9580% ( 47) 00:10:47.907 7895.904 - 7948.543: 1.6741% ( 77) 00:10:47.907 7948.543 - 8001.182: 2.4833% ( 87) 00:10:47.907 8001.182 - 8053.822: 3.4784% ( 107) 00:10:47.907 8053.822 - 8106.461: 4.7061% ( 132) 00:10:47.907 8106.461 - 8159.100: 6.0361% ( 143) 00:10:47.907 8159.100 - 8211.740: 7.5614% ( 164) 00:10:47.907 8211.740 - 8264.379: 9.0123% ( 156) 00:10:47.907 8264.379 - 8317.018: 10.7236% ( 184) 00:10:47.907 8317.018 - 8369.658: 12.5837% ( 200) 00:10:47.907 8369.658 - 8422.297: 14.3787% ( 193) 00:10:47.907 8422.297 - 8474.937: 16.2760% ( 204) 00:10:47.907 8474.937 - 8527.576: 18.3222% ( 220) 00:10:47.907 8527.576 - 8580.215: 20.0986% ( 191) 00:10:47.907 8580.215 - 8632.855: 21.4193% ( 142) 00:10:47.907 8632.855 - 8685.494: 22.7121% ( 139) 00:10:47.907 8685.494 - 8738.133: 24.1443% ( 154) 00:10:47.907 8738.133 - 8790.773: 25.1395% ( 107) 00:10:47.907 8790.773 - 8843.412: 26.4230% ( 138) 00:10:47.907 8843.412 - 8896.051: 27.5670% ( 123) 00:10:47.907 8896.051 - 8948.691: 28.4784% ( 98) 00:10:47.907 8948.691 - 9001.330: 29.4550% ( 105) 00:10:47.907 9001.330 - 9053.969: 30.7292% ( 137) 00:10:47.907 9053.969 - 9106.609: 31.8359% ( 119) 00:10:47.907 9106.609 - 9159.248: 32.8683% ( 111) 00:10:47.907 9159.248 - 9211.888: 33.8077% ( 101) 
00:10:47.907 9211.888 - 9264.527: 34.7005% ( 96) 00:10:47.907 9264.527 - 9317.166: 35.4446% ( 80) 00:10:47.907 9317.166 - 9369.806: 36.2909% ( 91) 00:10:47.907 9369.806 - 9422.445: 37.2582% ( 104) 00:10:47.907 9422.445 - 9475.084: 38.3371% ( 116) 00:10:47.907 9475.084 - 9527.724: 39.5089% ( 126) 00:10:47.907 9527.724 - 9580.363: 40.7552% ( 134) 00:10:47.907 9580.363 - 9633.002: 41.8527% ( 118) 00:10:47.907 9633.002 - 9685.642: 42.7827% ( 100) 00:10:47.907 9685.642 - 9738.281: 43.6477% ( 93) 00:10:47.907 9738.281 - 9790.920: 44.4940% ( 91) 00:10:47.907 9790.920 - 9843.560: 45.2381% ( 80) 00:10:47.907 9843.560 - 9896.199: 46.0658% ( 89) 00:10:47.907 9896.199 - 9948.839: 46.5867% ( 56) 00:10:47.907 9948.839 - 10001.478: 46.9959% ( 44) 00:10:47.907 10001.478 - 10054.117: 47.3865% ( 42) 00:10:47.907 10054.117 - 10106.757: 48.0097% ( 67) 00:10:47.907 10106.757 - 10159.396: 48.5863% ( 62) 00:10:47.907 10159.396 - 10212.035: 48.9490% ( 39) 00:10:47.907 10212.035 - 10264.675: 49.1722% ( 24) 00:10:47.907 10264.675 - 10317.314: 49.3676% ( 21) 00:10:47.907 10317.314 - 10369.953: 49.5164% ( 16) 00:10:47.907 10369.953 - 10422.593: 49.6559% ( 15) 00:10:47.907 10422.593 - 10475.232: 49.9907% ( 36) 00:10:47.907 10475.232 - 10527.871: 50.4185% ( 46) 00:10:47.907 10527.871 - 10580.511: 51.2370% ( 88) 00:10:47.907 10580.511 - 10633.150: 51.9996% ( 82) 00:10:47.907 10633.150 - 10685.790: 52.8274% ( 89) 00:10:47.907 10685.790 - 10738.429: 53.3110% ( 52) 00:10:47.907 10738.429 - 10791.068: 53.7202% ( 44) 00:10:47.907 10791.068 - 10843.708: 54.0458% ( 35) 00:10:47.907 10843.708 - 10896.347: 54.5294% ( 52) 00:10:47.907 10896.347 - 10948.986: 54.9665% ( 47) 00:10:47.907 10948.986 - 11001.626: 55.2920% ( 35) 00:10:47.907 11001.626 - 11054.265: 55.7106% ( 45) 00:10:47.907 11054.265 - 11106.904: 56.0082% ( 32) 00:10:47.907 11106.904 - 11159.544: 56.2500% ( 26) 00:10:47.907 11159.544 - 11212.183: 56.7336% ( 52) 00:10:47.907 11212.183 - 11264.822: 57.2266% ( 53) 00:10:47.907 11264.822 - 11317.462: 57.4963% ( 29) 00:10:47.907 11317.462 - 11370.101: 57.7288% ( 25) 00:10:47.907 11370.101 - 11422.741: 58.1380% ( 44) 00:10:47.907 11422.741 - 11475.380: 58.5286% ( 42) 00:10:47.907 11475.380 - 11528.019: 58.8449% ( 34) 00:10:47.907 11528.019 - 11580.659: 59.1704% ( 35) 00:10:47.907 11580.659 - 11633.298: 59.4773% ( 33) 00:10:47.907 11633.298 - 11685.937: 59.9702% ( 53) 00:10:47.907 11685.937 - 11738.577: 60.3702% ( 43) 00:10:47.907 11738.577 - 11791.216: 60.6771% ( 33) 00:10:47.907 11791.216 - 11843.855: 60.9840% ( 33) 00:10:47.907 11843.855 - 11896.495: 61.2909% ( 33) 00:10:47.907 11896.495 - 11949.134: 61.6908% ( 43) 00:10:47.907 11949.134 - 12001.773: 62.0536% ( 39) 00:10:47.907 12001.773 - 12054.413: 62.3512% ( 32) 00:10:47.907 12054.413 - 12107.052: 62.6860% ( 36) 00:10:47.907 12107.052 - 12159.692: 62.9650% ( 30) 00:10:47.907 12159.692 - 12212.331: 63.1882% ( 24) 00:10:47.907 12212.331 - 12264.970: 63.4394% ( 27) 00:10:47.907 12264.970 - 12317.610: 63.7091% ( 29) 00:10:47.907 12317.610 - 12370.249: 63.9881% ( 30) 00:10:47.907 12370.249 - 12422.888: 64.3415% ( 38) 00:10:47.907 12422.888 - 12475.528: 64.6298% ( 31) 00:10:47.907 12475.528 - 12528.167: 65.2158% ( 63) 00:10:47.907 12528.167 - 12580.806: 65.5878% ( 40) 00:10:47.907 12580.806 - 12633.446: 66.1272% ( 58) 00:10:47.907 12633.446 - 12686.085: 66.8062% ( 73) 00:10:47.907 12686.085 - 12738.724: 67.6246% ( 88) 00:10:47.907 12738.724 - 12791.364: 67.9967% ( 40) 00:10:47.907 12791.364 - 12844.003: 68.4059% ( 44) 00:10:47.907 12844.003 - 12896.643: 68.7035% ( 32) 
00:10:47.907 12896.643 - 12949.282: 69.0104% ( 33) 00:10:47.907 12949.282 - 13001.921: 69.3731% ( 39) 00:10:47.907 13001.921 - 13054.561: 69.6801% ( 33) 00:10:47.907 13054.561 - 13107.200: 69.9498% ( 29) 00:10:47.907 13107.200 - 13159.839: 70.1730% ( 24) 00:10:47.907 13159.839 - 13212.479: 70.3962% ( 24) 00:10:47.907 13212.479 - 13265.118: 70.5915% ( 21) 00:10:47.907 13265.118 - 13317.757: 70.8147% ( 24) 00:10:47.907 13317.757 - 13370.397: 71.1868% ( 40) 00:10:47.907 13370.397 - 13423.036: 71.4007% ( 23) 00:10:47.907 13423.036 - 13475.676: 71.8471% ( 48) 00:10:47.907 13475.676 - 13580.954: 72.1633% ( 34) 00:10:47.907 13580.954 - 13686.233: 72.4516% ( 31) 00:10:47.907 13686.233 - 13791.512: 72.9260% ( 51) 00:10:47.907 13791.512 - 13896.790: 73.4096% ( 52) 00:10:47.907 13896.790 - 14002.069: 74.0048% ( 64) 00:10:47.907 14002.069 - 14107.348: 74.4885% ( 52) 00:10:47.907 14107.348 - 14212.627: 74.7954% ( 33) 00:10:47.907 14212.627 - 14317.905: 75.2790% ( 52) 00:10:47.907 14317.905 - 14423.184: 75.7068% ( 46) 00:10:47.907 14423.184 - 14528.463: 76.3393% ( 68) 00:10:47.907 14528.463 - 14633.741: 76.9624% ( 67) 00:10:47.907 14633.741 - 14739.020: 77.4182% ( 49) 00:10:47.907 14739.020 - 14844.299: 77.9669% ( 59) 00:10:47.907 14844.299 - 14949.578: 78.5807% ( 66) 00:10:47.908 14949.578 - 15054.856: 79.1481% ( 61) 00:10:47.908 15054.856 - 15160.135: 79.5480% ( 43) 00:10:47.908 15160.135 - 15265.414: 79.9386% ( 42) 00:10:47.908 15265.414 - 15370.692: 80.4688% ( 57) 00:10:47.908 15370.692 - 15475.971: 80.9431% ( 51) 00:10:47.908 15475.971 - 15581.250: 81.2314% ( 31) 00:10:47.908 15581.250 - 15686.529: 81.6871% ( 49) 00:10:47.908 15686.529 - 15791.807: 82.0778% ( 42) 00:10:47.908 15791.807 - 15897.086: 82.5707% ( 53) 00:10:47.908 15897.086 - 16002.365: 83.0264% ( 49) 00:10:47.908 16002.365 - 16107.643: 83.7705% ( 80) 00:10:47.908 16107.643 - 16212.922: 84.3471% ( 62) 00:10:47.908 16212.922 - 16318.201: 84.7842% ( 47) 00:10:47.908 16318.201 - 16423.480: 85.1842% ( 43) 00:10:47.908 16423.480 - 16528.758: 85.6864% ( 54) 00:10:47.908 16528.758 - 16634.037: 86.0119% ( 35) 00:10:47.908 16634.037 - 16739.316: 86.2909% ( 30) 00:10:47.908 16739.316 - 16844.594: 86.5513% ( 28) 00:10:47.908 16844.594 - 16949.873: 86.8490% ( 32) 00:10:47.908 16949.873 - 17055.152: 87.2582% ( 44) 00:10:47.908 17055.152 - 17160.431: 87.8069% ( 59) 00:10:47.908 17160.431 - 17265.709: 88.2999% ( 53) 00:10:47.908 17265.709 - 17370.988: 88.7184% ( 45) 00:10:47.908 17370.988 - 17476.267: 89.2578% ( 58) 00:10:47.908 17476.267 - 17581.545: 89.8158% ( 60) 00:10:47.908 17581.545 - 17686.824: 90.6436% ( 89) 00:10:47.908 17686.824 - 17792.103: 91.4993% ( 92) 00:10:47.908 17792.103 - 17897.382: 92.0666% ( 61) 00:10:47.908 17897.382 - 18002.660: 92.6711% ( 65) 00:10:47.908 18002.660 - 18107.939: 93.3780% ( 76) 00:10:47.908 18107.939 - 18213.218: 94.0755% ( 75) 00:10:47.908 18213.218 - 18318.496: 94.7080% ( 68) 00:10:47.908 18318.496 - 18423.775: 95.0428% ( 36) 00:10:47.908 18423.775 - 18529.054: 95.4241% ( 41) 00:10:47.908 18529.054 - 18634.333: 95.7775% ( 38) 00:10:47.908 18634.333 - 18739.611: 96.2333% ( 49) 00:10:47.908 18739.611 - 18844.890: 96.7634% ( 57) 00:10:47.908 18844.890 - 18950.169: 97.1912% ( 46) 00:10:47.908 18950.169 - 19055.447: 97.5446% ( 38) 00:10:47.908 19055.447 - 19160.726: 97.9167% ( 40) 00:10:47.908 19160.726 - 19266.005: 98.1120% ( 21) 00:10:47.908 19266.005 - 19371.284: 98.3817% ( 29) 00:10:47.908 19371.284 - 19476.562: 98.5491% ( 18) 00:10:47.908 19476.562 - 19581.841: 98.6421% ( 10) 00:10:47.908 19581.841 - 19687.120: 
98.7165% ( 8) 00:10:47.908 19687.120 - 19792.398: 98.7909% ( 8) 00:10:47.908 19792.398 - 19897.677: 98.8095% ( 2) 00:10:47.908 29899.155 - 30109.712: 98.8560% ( 5) 00:10:47.908 30109.712 - 30320.270: 98.9211% ( 7) 00:10:47.908 30320.270 - 30530.827: 98.9862% ( 7) 00:10:47.908 30530.827 - 30741.385: 99.0420% ( 6) 00:10:47.908 30741.385 - 30951.942: 99.0978% ( 6) 00:10:47.908 30951.942 - 31162.500: 99.1722% ( 8) 00:10:47.908 31162.500 - 31373.057: 99.2281% ( 6) 00:10:47.908 31373.057 - 31583.614: 99.2839% ( 6) 00:10:47.908 31583.614 - 31794.172: 99.3397% ( 6) 00:10:47.908 31794.172 - 32004.729: 99.3955% ( 6) 00:10:47.908 32004.729 - 32215.287: 99.4048% ( 1) 00:10:47.908 39795.354 - 40005.912: 99.4327% ( 3) 00:10:47.908 40005.912 - 40216.469: 99.4885% ( 6) 00:10:47.908 40216.469 - 40427.027: 99.5443% ( 6) 00:10:47.908 40427.027 - 40637.584: 99.6001% ( 6) 00:10:47.908 40637.584 - 40848.141: 99.6466% ( 5) 00:10:47.908 40848.141 - 41058.699: 99.7024% ( 6) 00:10:47.908 41058.699 - 41269.256: 99.7582% ( 6) 00:10:47.908 41269.256 - 41479.814: 99.8140% ( 6) 00:10:47.908 41479.814 - 41690.371: 99.8698% ( 6) 00:10:47.908 41690.371 - 41900.929: 99.9256% ( 6) 00:10:47.908 41900.929 - 42111.486: 99.9721% ( 5) 00:10:47.908 42111.486 - 42322.043: 100.0000% ( 3) 00:10:47.908 00:10:47.908 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:47.908 ============================================================================== 00:10:47.908 Range in us Cumulative IO count 00:10:47.908 7580.067 - 7632.707: 0.0093% ( 1) 00:10:47.908 7632.707 - 7685.346: 0.0186% ( 1) 00:10:47.908 7685.346 - 7737.986: 0.0558% ( 4) 00:10:47.908 7737.986 - 7790.625: 0.1395% ( 9) 00:10:47.908 7790.625 - 7843.264: 0.2697% ( 14) 00:10:47.908 7843.264 - 7895.904: 0.4743% ( 22) 00:10:47.908 7895.904 - 7948.543: 0.8092% ( 36) 00:10:47.908 7948.543 - 8001.182: 1.3207% ( 55) 00:10:47.908 8001.182 - 8053.822: 2.0461% ( 78) 00:10:47.908 8053.822 - 8106.461: 3.4226% ( 148) 00:10:47.908 8106.461 - 8159.100: 4.7247% ( 140) 00:10:47.908 8159.100 - 8211.740: 6.8917% ( 233) 00:10:47.908 8211.740 - 8264.379: 8.8356% ( 209) 00:10:47.908 8264.379 - 8317.018: 10.8817% ( 220) 00:10:47.908 8317.018 - 8369.658: 13.1696% ( 246) 00:10:47.908 8369.658 - 8422.297: 15.4390% ( 244) 00:10:47.908 8422.297 - 8474.937: 17.6618% ( 239) 00:10:47.908 8474.937 - 8527.576: 19.5033% ( 198) 00:10:47.908 8527.576 - 8580.215: 21.1310% ( 175) 00:10:47.908 8580.215 - 8632.855: 22.3400% ( 130) 00:10:47.908 8632.855 - 8685.494: 23.3724% ( 111) 00:10:47.908 8685.494 - 8738.133: 24.5071% ( 122) 00:10:47.908 8738.133 - 8790.773: 25.3720% ( 93) 00:10:47.908 8790.773 - 8843.412: 26.3300% ( 103) 00:10:47.908 8843.412 - 8896.051: 27.5763% ( 134) 00:10:47.908 8896.051 - 8948.691: 28.8318% ( 135) 00:10:47.908 8948.691 - 9001.330: 29.8642% ( 111) 00:10:47.908 9001.330 - 9053.969: 30.7850% ( 99) 00:10:47.908 9053.969 - 9106.609: 31.5476% ( 82) 00:10:47.908 9106.609 - 9159.248: 32.1243% ( 62) 00:10:47.908 9159.248 - 9211.888: 32.7102% ( 63) 00:10:47.908 9211.888 - 9264.527: 33.4914% ( 84) 00:10:47.908 9264.527 - 9317.166: 34.3378% ( 91) 00:10:47.908 9317.166 - 9369.806: 35.5562% ( 131) 00:10:47.908 9369.806 - 9422.445: 36.7281% ( 126) 00:10:47.908 9422.445 - 9475.084: 38.1603% ( 154) 00:10:47.908 9475.084 - 9527.724: 39.2578% ( 118) 00:10:47.908 9527.724 - 9580.363: 40.3553% ( 118) 00:10:47.908 9580.363 - 9633.002: 41.2946% ( 101) 00:10:47.908 9633.002 - 9685.642: 42.6246% ( 143) 00:10:47.908 9685.642 - 9738.281: 43.4152% ( 85) 00:10:47.908 9738.281 - 9790.920: 44.0755% ( 71) 
00:10:47.908 9790.920 - 9843.560: 44.7731% ( 75) 00:10:47.908 9843.560 - 9896.199: 45.5171% ( 80) 00:10:47.908 9896.199 - 9948.839: 46.2333% ( 77) 00:10:47.908 9948.839 - 10001.478: 46.8843% ( 70) 00:10:47.908 10001.478 - 10054.117: 47.5539% ( 72) 00:10:47.908 10054.117 - 10106.757: 48.1678% ( 66) 00:10:47.908 10106.757 - 10159.396: 48.5491% ( 41) 00:10:47.908 10159.396 - 10212.035: 49.0141% ( 50) 00:10:47.908 10212.035 - 10264.675: 49.4606% ( 48) 00:10:47.908 10264.675 - 10317.314: 50.1023% ( 69) 00:10:47.908 10317.314 - 10369.953: 50.6324% ( 57) 00:10:47.908 10369.953 - 10422.593: 50.8185% ( 20) 00:10:47.908 10422.593 - 10475.232: 51.0045% ( 20) 00:10:47.908 10475.232 - 10527.871: 51.1812% ( 19) 00:10:47.908 10527.871 - 10580.511: 51.4230% ( 26) 00:10:47.908 10580.511 - 10633.150: 51.8787% ( 49) 00:10:47.908 10633.150 - 10685.790: 52.4368% ( 60) 00:10:47.908 10685.790 - 10738.429: 52.8925% ( 49) 00:10:47.908 10738.429 - 10791.068: 53.3017% ( 44) 00:10:47.908 10791.068 - 10843.708: 53.7109% ( 44) 00:10:47.908 10843.708 - 10896.347: 54.0644% ( 38) 00:10:47.908 10896.347 - 10948.986: 54.3806% ( 34) 00:10:47.908 10948.986 - 11001.626: 54.7898% ( 44) 00:10:47.908 11001.626 - 11054.265: 55.3478% ( 60) 00:10:47.908 11054.265 - 11106.904: 55.9710% ( 67) 00:10:47.908 11106.904 - 11159.544: 56.6313% ( 71) 00:10:47.908 11159.544 - 11212.183: 57.2545% ( 67) 00:10:47.908 11212.183 - 11264.822: 57.8218% ( 61) 00:10:47.908 11264.822 - 11317.462: 58.6589% ( 90) 00:10:47.908 11317.462 - 11370.101: 59.3285% ( 72) 00:10:47.908 11370.101 - 11422.741: 59.9144% ( 63) 00:10:47.908 11422.741 - 11475.380: 60.3981% ( 52) 00:10:47.908 11475.380 - 11528.019: 60.7515% ( 38) 00:10:47.908 11528.019 - 11580.659: 61.0584% ( 33) 00:10:47.908 11580.659 - 11633.298: 61.3188% ( 28) 00:10:47.909 11633.298 - 11685.937: 61.4676% ( 16) 00:10:47.909 11685.937 - 11738.577: 61.6164% ( 16) 00:10:47.909 11738.577 - 11791.216: 61.7839% ( 18) 00:10:47.909 11791.216 - 11843.855: 61.9234% ( 15) 00:10:47.909 11843.855 - 11896.495: 62.0443% ( 13) 00:10:47.909 11896.495 - 11949.134: 62.2675% ( 24) 00:10:47.909 11949.134 - 12001.773: 62.4163% ( 16) 00:10:47.909 12001.773 - 12054.413: 62.5651% ( 16) 00:10:47.909 12054.413 - 12107.052: 62.6860% ( 13) 00:10:47.909 12107.052 - 12159.692: 62.8534% ( 18) 00:10:47.909 12159.692 - 12212.331: 63.0301% ( 19) 00:10:47.909 12212.331 - 12264.970: 63.2440% ( 23) 00:10:47.909 12264.970 - 12317.610: 63.3836% ( 15) 00:10:47.909 12317.610 - 12370.249: 63.7370% ( 38) 00:10:47.909 12370.249 - 12422.888: 63.9230% ( 20) 00:10:47.909 12422.888 - 12475.528: 64.2392% ( 34) 00:10:47.909 12475.528 - 12528.167: 64.6391% ( 43) 00:10:47.909 12528.167 - 12580.806: 64.9833% ( 37) 00:10:47.909 12580.806 - 12633.446: 65.4297% ( 48) 00:10:47.909 12633.446 - 12686.085: 66.0342% ( 65) 00:10:47.909 12686.085 - 12738.724: 66.4900% ( 49) 00:10:47.909 12738.724 - 12791.364: 66.8620% ( 40) 00:10:47.909 12791.364 - 12844.003: 67.2433% ( 41) 00:10:47.909 12844.003 - 12896.643: 67.6060% ( 39) 00:10:47.909 12896.643 - 12949.282: 67.9688% ( 39) 00:10:47.909 12949.282 - 13001.921: 68.4431% ( 51) 00:10:47.909 13001.921 - 13054.561: 68.8709% ( 46) 00:10:47.909 13054.561 - 13107.200: 69.2987% ( 46) 00:10:47.909 13107.200 - 13159.839: 69.6150% ( 34) 00:10:47.909 13159.839 - 13212.479: 69.9777% ( 39) 00:10:47.909 13212.479 - 13265.118: 70.3683% ( 42) 00:10:47.909 13265.118 - 13317.757: 70.7403% ( 40) 00:10:47.909 13317.757 - 13370.397: 70.9728% ( 25) 00:10:47.909 13370.397 - 13423.036: 71.1217% ( 16) 00:10:47.909 13423.036 - 13475.676: 
71.3356% ( 23) 00:10:47.909 13475.676 - 13580.954: 71.8936% ( 60) 00:10:47.909 13580.954 - 13686.233: 72.5167% ( 67) 00:10:47.909 13686.233 - 13791.512: 72.9911% ( 51) 00:10:47.909 13791.512 - 13896.790: 73.4096% ( 45) 00:10:47.909 13896.790 - 14002.069: 73.8374% ( 46) 00:10:47.909 14002.069 - 14107.348: 74.2001% ( 39) 00:10:47.909 14107.348 - 14212.627: 74.6187% ( 45) 00:10:47.909 14212.627 - 14317.905: 74.9442% ( 35) 00:10:47.909 14317.905 - 14423.184: 75.5673% ( 67) 00:10:47.909 14423.184 - 14528.463: 75.9766% ( 44) 00:10:47.909 14528.463 - 14633.741: 76.4044% ( 46) 00:10:47.909 14633.741 - 14739.020: 77.0089% ( 65) 00:10:47.909 14739.020 - 14844.299: 77.7530% ( 80) 00:10:47.909 14844.299 - 14949.578: 78.4691% ( 77) 00:10:47.909 14949.578 - 15054.856: 78.7946% ( 35) 00:10:47.909 15054.856 - 15160.135: 79.3899% ( 64) 00:10:47.909 15160.135 - 15265.414: 79.9107% ( 56) 00:10:47.909 15265.414 - 15370.692: 80.5432% ( 68) 00:10:47.909 15370.692 - 15475.971: 80.9431% ( 43) 00:10:47.909 15475.971 - 15581.250: 81.3337% ( 42) 00:10:47.909 15581.250 - 15686.529: 81.7429% ( 44) 00:10:47.909 15686.529 - 15791.807: 82.3661% ( 67) 00:10:47.909 15791.807 - 15897.086: 82.8497% ( 52) 00:10:47.909 15897.086 - 16002.365: 83.3519% ( 54) 00:10:47.909 16002.365 - 16107.643: 83.6589% ( 33) 00:10:47.909 16107.643 - 16212.922: 84.0774% ( 45) 00:10:47.909 16212.922 - 16318.201: 84.6261% ( 59) 00:10:47.909 16318.201 - 16423.480: 85.1562% ( 57) 00:10:47.909 16423.480 - 16528.758: 85.6864% ( 57) 00:10:47.909 16528.758 - 16634.037: 86.0677% ( 41) 00:10:47.909 16634.037 - 16739.316: 86.4025% ( 36) 00:10:47.909 16739.316 - 16844.594: 86.8211% ( 45) 00:10:47.909 16844.594 - 16949.873: 87.2303% ( 44) 00:10:47.909 16949.873 - 17055.152: 87.7697% ( 58) 00:10:47.909 17055.152 - 17160.431: 88.1975% ( 46) 00:10:47.909 17160.431 - 17265.709: 88.8858% ( 74) 00:10:47.909 17265.709 - 17370.988: 89.6205% ( 79) 00:10:47.909 17370.988 - 17476.267: 90.4855% ( 93) 00:10:47.909 17476.267 - 17581.545: 91.1458% ( 71) 00:10:47.909 17581.545 - 17686.824: 91.6853% ( 58) 00:10:47.909 17686.824 - 17792.103: 92.3270% ( 69) 00:10:47.909 17792.103 - 17897.382: 92.7083% ( 41) 00:10:47.909 17897.382 - 18002.660: 93.1827% ( 51) 00:10:47.909 18002.660 - 18107.939: 93.8058% ( 67) 00:10:47.909 18107.939 - 18213.218: 94.3080% ( 54) 00:10:47.909 18213.218 - 18318.496: 94.7080% ( 43) 00:10:47.909 18318.496 - 18423.775: 95.0335% ( 35) 00:10:47.909 18423.775 - 18529.054: 95.3869% ( 38) 00:10:47.909 18529.054 - 18634.333: 95.7961% ( 44) 00:10:47.909 18634.333 - 18739.611: 96.1403% ( 37) 00:10:47.909 18739.611 - 18844.890: 96.4100% ( 29) 00:10:47.909 18844.890 - 18950.169: 96.8006% ( 42) 00:10:47.909 18950.169 - 19055.447: 97.0889% ( 31) 00:10:47.909 19055.447 - 19160.726: 97.3865% ( 32) 00:10:47.909 19160.726 - 19266.005: 97.8516% ( 50) 00:10:47.909 19266.005 - 19371.284: 98.0841% ( 25) 00:10:47.909 19371.284 - 19476.562: 98.2050% ( 13) 00:10:47.909 19476.562 - 19581.841: 98.3073% ( 11) 00:10:47.909 19581.841 - 19687.120: 98.4189% ( 12) 00:10:47.909 19687.120 - 19792.398: 98.6607% ( 26) 00:10:47.909 19792.398 - 19897.677: 98.6886% ( 3) 00:10:47.909 19897.677 - 20002.956: 98.7165% ( 3) 00:10:47.909 20002.956 - 20108.235: 98.7537% ( 4) 00:10:47.909 20108.235 - 20213.513: 98.7816% ( 3) 00:10:47.909 20213.513 - 20318.792: 98.8095% ( 3) 00:10:47.909 28004.138 - 28214.696: 98.8467% ( 4) 00:10:47.909 28214.696 - 28425.253: 98.9025% ( 6) 00:10:47.909 28425.253 - 28635.810: 98.9676% ( 7) 00:10:47.909 28635.810 - 28846.368: 99.0234% ( 6) 00:10:47.909 28846.368 - 
29056.925: 99.0885% ( 7) 00:10:47.909 29056.925 - 29267.483: 99.1536% ( 7) 00:10:47.909 29267.483 - 29478.040: 99.2094% ( 6) 00:10:47.909 29478.040 - 29688.598: 99.2746% ( 7) 00:10:47.909 29688.598 - 29899.155: 99.3304% ( 6) 00:10:47.909 29899.155 - 30109.712: 99.3955% ( 7) 00:10:47.909 30109.712 - 30320.270: 99.4048% ( 1) 00:10:47.909 38532.010 - 38742.567: 99.4141% ( 1) 00:10:47.909 38742.567 - 38953.124: 99.4699% ( 6) 00:10:47.909 38953.124 - 39163.682: 99.5257% ( 6) 00:10:47.909 39163.682 - 39374.239: 99.5722% ( 5) 00:10:47.909 39374.239 - 39584.797: 99.6280% ( 6) 00:10:47.909 39584.797 - 39795.354: 99.6838% ( 6) 00:10:47.909 39795.354 - 40005.912: 99.7303% ( 5) 00:10:47.909 40005.912 - 40216.469: 99.7861% ( 6) 00:10:47.909 40216.469 - 40427.027: 99.8419% ( 6) 00:10:47.909 40427.027 - 40637.584: 99.8977% ( 6) 00:10:47.909 40637.584 - 40848.141: 99.9535% ( 6) 00:10:47.909 40848.141 - 41058.699: 100.0000% ( 5) 00:10:47.909 00:10:47.909 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:47.909 ============================================================================== 00:10:47.909 Range in us Cumulative IO count 00:10:47.909 7527.428 - 7580.067: 0.0093% ( 1) 00:10:47.909 7737.986 - 7790.625: 0.0465% ( 4) 00:10:47.909 7790.625 - 7843.264: 0.1209% ( 8) 00:10:47.909 7843.264 - 7895.904: 0.3348% ( 23) 00:10:47.909 7895.904 - 7948.543: 0.7254% ( 42) 00:10:47.909 7948.543 - 8001.182: 1.5439% ( 88) 00:10:47.909 8001.182 - 8053.822: 2.7437% ( 129) 00:10:47.909 8053.822 - 8106.461: 4.0086% ( 136) 00:10:47.909 8106.461 - 8159.100: 5.4874% ( 159) 00:10:47.909 8159.100 - 8211.740: 6.7243% ( 133) 00:10:47.909 8211.740 - 8264.379: 8.1659% ( 155) 00:10:47.909 8264.379 - 8317.018: 9.7656% ( 172) 00:10:47.909 8317.018 - 8369.658: 11.5885% ( 196) 00:10:47.909 8369.658 - 8422.297: 13.5696% ( 213) 00:10:47.909 8422.297 - 8474.937: 15.9784% ( 259) 00:10:47.909 8474.937 - 8527.576: 18.1641% ( 235) 00:10:47.909 8527.576 - 8580.215: 20.2102% ( 220) 00:10:47.909 8580.215 - 8632.855: 21.8936% ( 181) 00:10:47.909 8632.855 - 8685.494: 23.4747% ( 170) 00:10:47.909 8685.494 - 8738.133: 24.9907% ( 163) 00:10:47.909 8738.133 - 8790.773: 26.2184% ( 132) 00:10:47.909 8790.773 - 8843.412: 27.1856% ( 104) 00:10:47.909 8843.412 - 8896.051: 28.3947% ( 130) 00:10:47.909 8896.051 - 8948.691: 29.3434% ( 102) 00:10:47.909 8948.691 - 9001.330: 30.1339% ( 85) 00:10:47.909 9001.330 - 9053.969: 30.8780% ( 80) 00:10:47.909 9053.969 - 9106.609: 31.6313% ( 81) 00:10:47.909 9106.609 - 9159.248: 32.4591% ( 89) 00:10:47.909 9159.248 - 9211.888: 33.3984% ( 101) 00:10:47.909 9211.888 - 9264.527: 34.2541% ( 92) 00:10:47.909 9264.527 - 9317.166: 34.9237% ( 72) 00:10:47.909 9317.166 - 9369.806: 35.6213% ( 75) 00:10:47.909 9369.806 - 9422.445: 36.7281% ( 119) 00:10:47.909 9422.445 - 9475.084: 37.5093% ( 84) 00:10:47.909 9475.084 - 9527.724: 38.6068% ( 118) 00:10:47.909 9527.724 - 9580.363: 39.9182% ( 141) 00:10:47.909 9580.363 - 9633.002: 41.0528% ( 122) 00:10:47.909 9633.002 - 9685.642: 42.1596% ( 119) 00:10:47.909 9685.642 - 9738.281: 43.3129% ( 124) 00:10:47.910 9738.281 - 9790.920: 44.1871% ( 94) 00:10:47.910 9790.920 - 9843.560: 45.0521% ( 93) 00:10:47.910 9843.560 - 9896.199: 46.0472% ( 107) 00:10:47.910 9896.199 - 9948.839: 47.1261% ( 116) 00:10:47.910 9948.839 - 10001.478: 48.0190% ( 96) 00:10:47.910 10001.478 - 10054.117: 48.5305% ( 55) 00:10:47.910 10054.117 - 10106.757: 49.0048% ( 51) 00:10:47.910 10106.757 - 10159.396: 49.5257% ( 56) 00:10:47.910 10159.396 - 10212.035: 49.9535% ( 46) 00:10:47.910 10212.035 - 
10264.675: 50.2697% ( 34) 00:10:47.910 10264.675 - 10317.314: 50.4650% ( 21) 00:10:47.910 10317.314 - 10369.953: 50.6696% ( 22) 00:10:47.910 10369.953 - 10422.593: 50.8371% ( 18) 00:10:47.910 10422.593 - 10475.232: 51.0696% ( 25) 00:10:47.910 10475.232 - 10527.871: 51.3765% ( 33) 00:10:47.910 10527.871 - 10580.511: 51.6741% ( 32) 00:10:47.910 10580.511 - 10633.150: 51.9717% ( 32) 00:10:47.910 10633.150 - 10685.790: 52.3065% ( 36) 00:10:47.910 10685.790 - 10738.429: 52.8553% ( 59) 00:10:47.910 10738.429 - 10791.068: 53.3761% ( 56) 00:10:47.910 10791.068 - 10843.708: 54.1109% ( 79) 00:10:47.910 10843.708 - 10896.347: 54.6596% ( 59) 00:10:47.910 10896.347 - 10948.986: 55.2548% ( 64) 00:10:47.910 10948.986 - 11001.626: 55.8501% ( 64) 00:10:47.910 11001.626 - 11054.265: 56.3244% ( 51) 00:10:47.910 11054.265 - 11106.904: 57.0126% ( 74) 00:10:47.910 11106.904 - 11159.544: 57.4498% ( 47) 00:10:47.910 11159.544 - 11212.183: 57.9985% ( 59) 00:10:47.910 11212.183 - 11264.822: 58.3333% ( 36) 00:10:47.910 11264.822 - 11317.462: 58.6496% ( 34) 00:10:47.910 11317.462 - 11370.101: 58.9937% ( 37) 00:10:47.910 11370.101 - 11422.741: 59.2727% ( 30) 00:10:47.910 11422.741 - 11475.380: 59.7377% ( 50) 00:10:47.910 11475.380 - 11528.019: 60.1097% ( 40) 00:10:47.910 11528.019 - 11580.659: 60.4539% ( 37) 00:10:47.910 11580.659 - 11633.298: 60.6585% ( 22) 00:10:47.910 11633.298 - 11685.937: 60.9096% ( 27) 00:10:47.910 11685.937 - 11738.577: 61.1142% ( 22) 00:10:47.910 11738.577 - 11791.216: 61.3467% ( 25) 00:10:47.910 11791.216 - 11843.855: 61.4955% ( 16) 00:10:47.910 11843.855 - 11896.495: 61.6908% ( 21) 00:10:47.910 11896.495 - 11949.134: 61.9141% ( 24) 00:10:47.910 11949.134 - 12001.773: 62.2396% ( 35) 00:10:47.910 12001.773 - 12054.413: 62.4349% ( 21) 00:10:47.910 12054.413 - 12107.052: 62.7418% ( 33) 00:10:47.910 12107.052 - 12159.692: 63.0301% ( 31) 00:10:47.910 12159.692 - 12212.331: 63.2254% ( 21) 00:10:47.910 12212.331 - 12264.970: 63.5417% ( 34) 00:10:47.910 12264.970 - 12317.610: 63.9137% ( 40) 00:10:47.910 12317.610 - 12370.249: 64.2857% ( 40) 00:10:47.910 12370.249 - 12422.888: 64.5554% ( 29) 00:10:47.910 12422.888 - 12475.528: 65.1879% ( 68) 00:10:47.910 12475.528 - 12528.167: 65.4390% ( 27) 00:10:47.910 12528.167 - 12580.806: 65.9226% ( 52) 00:10:47.910 12580.806 - 12633.446: 66.1923% ( 29) 00:10:47.910 12633.446 - 12686.085: 66.4342% ( 26) 00:10:47.910 12686.085 - 12738.724: 66.9178% ( 52) 00:10:47.910 12738.724 - 12791.364: 67.4014% ( 52) 00:10:47.910 12791.364 - 12844.003: 67.6525% ( 27) 00:10:47.910 12844.003 - 12896.643: 68.0246% ( 40) 00:10:47.910 12896.643 - 12949.282: 68.4710% ( 48) 00:10:47.910 12949.282 - 13001.921: 68.7593% ( 31) 00:10:47.910 13001.921 - 13054.561: 68.9918% ( 25) 00:10:47.910 13054.561 - 13107.200: 69.2057% ( 23) 00:10:47.910 13107.200 - 13159.839: 69.2894% ( 9) 00:10:47.910 13159.839 - 13212.479: 69.3545% ( 7) 00:10:47.910 13212.479 - 13265.118: 69.4382% ( 9) 00:10:47.910 13265.118 - 13317.757: 69.5312% ( 10) 00:10:47.910 13317.757 - 13370.397: 69.6987% ( 18) 00:10:47.910 13370.397 - 13423.036: 69.8568% ( 17) 00:10:47.910 13423.036 - 13475.676: 70.1079% ( 27) 00:10:47.910 13475.676 - 13580.954: 70.7031% ( 64) 00:10:47.910 13580.954 - 13686.233: 71.4379% ( 79) 00:10:47.910 13686.233 - 13791.512: 72.2377% ( 86) 00:10:47.910 13791.512 - 13896.790: 72.9446% ( 76) 00:10:47.910 13896.790 - 14002.069: 73.4096% ( 50) 00:10:47.910 14002.069 - 14107.348: 73.8839% ( 51) 00:10:47.910 14107.348 - 14212.627: 74.4420% ( 60) 00:10:47.910 14212.627 - 14317.905: 74.8791% ( 47) 00:10:47.910 
14317.905 - 14423.184: 75.3255% ( 48) 00:10:47.910 14423.184 - 14528.463: 75.7812% ( 49) 00:10:47.910 14528.463 - 14633.741: 76.3579% ( 62) 00:10:47.910 14633.741 - 14739.020: 77.3158% ( 103) 00:10:47.910 14739.020 - 14844.299: 78.1064% ( 85) 00:10:47.910 14844.299 - 14949.578: 78.8225% ( 77) 00:10:47.910 14949.578 - 15054.856: 79.2411% ( 45) 00:10:47.910 15054.856 - 15160.135: 79.5945% ( 38) 00:10:47.910 15160.135 - 15265.414: 80.1711% ( 62) 00:10:47.910 15265.414 - 15370.692: 80.7013% ( 57) 00:10:47.910 15370.692 - 15475.971: 81.1291% ( 46) 00:10:47.910 15475.971 - 15581.250: 81.7057% ( 62) 00:10:47.910 15581.250 - 15686.529: 82.1429% ( 47) 00:10:47.910 15686.529 - 15791.807: 82.4963% ( 38) 00:10:47.910 15791.807 - 15897.086: 82.8497% ( 38) 00:10:47.910 15897.086 - 16002.365: 83.2775% ( 46) 00:10:47.910 16002.365 - 16107.643: 84.0960% ( 88) 00:10:47.910 16107.643 - 16212.922: 84.5610% ( 50) 00:10:47.910 16212.922 - 16318.201: 85.2586% ( 75) 00:10:47.910 16318.201 - 16423.480: 85.7050% ( 48) 00:10:47.910 16423.480 - 16528.758: 86.0584% ( 38) 00:10:47.910 16528.758 - 16634.037: 86.5327% ( 51) 00:10:47.910 16634.037 - 16739.316: 87.1094% ( 62) 00:10:47.910 16739.316 - 16844.594: 87.8069% ( 75) 00:10:47.910 16844.594 - 16949.873: 88.3836% ( 62) 00:10:47.910 16949.873 - 17055.152: 88.8300% ( 48) 00:10:47.910 17055.152 - 17160.431: 89.2020% ( 40) 00:10:47.910 17160.431 - 17265.709: 89.4996% ( 32) 00:10:47.910 17265.709 - 17370.988: 89.8996% ( 43) 00:10:47.910 17370.988 - 17476.267: 90.4111% ( 55) 00:10:47.910 17476.267 - 17581.545: 91.2946% ( 95) 00:10:47.910 17581.545 - 17686.824: 91.8992% ( 65) 00:10:47.910 17686.824 - 17792.103: 92.5781% ( 73) 00:10:47.910 17792.103 - 17897.382: 93.1641% ( 63) 00:10:47.910 17897.382 - 18002.660: 93.5175% ( 38) 00:10:47.910 18002.660 - 18107.939: 93.9546% ( 47) 00:10:47.910 18107.939 - 18213.218: 94.4103% ( 49) 00:10:47.910 18213.218 - 18318.496: 94.8661% ( 49) 00:10:47.910 18318.496 - 18423.775: 95.1637% ( 32) 00:10:47.910 18423.775 - 18529.054: 95.5264% ( 39) 00:10:47.910 18529.054 - 18634.333: 95.6752% ( 16) 00:10:47.910 18634.333 - 18739.611: 95.8333% ( 17) 00:10:47.910 18739.611 - 18844.890: 95.9914% ( 17) 00:10:47.910 18844.890 - 18950.169: 96.1589% ( 18) 00:10:47.910 18950.169 - 19055.447: 96.2984% ( 15) 00:10:47.910 19055.447 - 19160.726: 96.5030% ( 22) 00:10:47.910 19160.726 - 19266.005: 96.8285% ( 35) 00:10:47.910 19266.005 - 19371.284: 97.2098% ( 41) 00:10:47.910 19371.284 - 19476.562: 97.4981% ( 31) 00:10:47.910 19476.562 - 19581.841: 97.6935% ( 21) 00:10:47.910 19581.841 - 19687.120: 98.0190% ( 35) 00:10:47.910 19687.120 - 19792.398: 98.2050% ( 20) 00:10:47.910 19792.398 - 19897.677: 98.3910% ( 20) 00:10:47.910 19897.677 - 20002.956: 98.5584% ( 18) 00:10:47.910 20002.956 - 20108.235: 98.6793% ( 13) 00:10:47.910 20108.235 - 20213.513: 98.7723% ( 10) 00:10:47.910 20213.513 - 20318.792: 98.8095% ( 4) 00:10:47.910 26635.515 - 26740.794: 98.8188% ( 1) 00:10:47.910 26951.351 - 27161.908: 98.8281% ( 1) 00:10:47.910 27161.908 - 27372.466: 98.9490% ( 13) 00:10:47.910 27372.466 - 27583.023: 99.0699% ( 13) 00:10:47.910 27583.023 - 27793.581: 99.1443% ( 8) 00:10:47.910 27793.581 - 28004.138: 99.1815% ( 4) 00:10:47.910 28004.138 - 28214.696: 99.2281% ( 5) 00:10:47.910 28214.696 - 28425.253: 99.2746% ( 5) 00:10:47.910 28425.253 - 28635.810: 99.3211% ( 5) 00:10:47.910 28635.810 - 28846.368: 99.3676% ( 5) 00:10:47.910 28846.368 - 29056.925: 99.4048% ( 4) 00:10:47.910 36215.878 - 36426.435: 99.4606% ( 6) 00:10:47.910 37479.222 - 37689.780: 99.4792% ( 2) 
00:10:47.910 37689.780 - 37900.337: 99.5257% ( 5) 00:10:47.910 37900.337 - 38110.895: 99.5722% ( 5) 00:10:47.910 38110.895 - 38321.452: 99.6280% ( 6) 00:10:47.910 38321.452 - 38532.010: 99.6745% ( 5) 00:10:47.910 38532.010 - 38742.567: 99.7303% ( 6) 00:10:47.910 38742.567 - 38953.124: 99.7861% ( 6) 00:10:47.910 38953.124 - 39163.682: 99.8419% ( 6) 00:10:47.910 39163.682 - 39374.239: 99.8977% ( 6) 00:10:47.910 39374.239 - 39584.797: 99.9535% ( 6) 00:10:47.910 39584.797 - 39795.354: 100.0000% ( 5) 00:10:47.910 00:10:47.910 21:41:55 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:47.910 00:10:47.910 real 0m2.717s 00:10:47.910 user 0m2.258s 00:10:47.910 sys 0m0.346s 00:10:47.910 21:41:55 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.910 21:41:55 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:47.910 ************************************ 00:10:47.910 END TEST nvme_perf 00:10:47.910 ************************************ 00:10:47.910 21:41:55 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:47.910 21:41:55 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:47.910 21:41:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.910 21:41:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:47.910 ************************************ 00:10:47.910 START TEST nvme_hello_world 00:10:47.910 ************************************ 00:10:47.910 21:41:55 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:48.169 Initializing NVMe Controllers 00:10:48.169 Attached to 0000:00:10.0 00:10:48.169 Namespace ID: 1 size: 6GB 00:10:48.169 Attached to 0000:00:11.0 00:10:48.169 Namespace ID: 1 size: 5GB 00:10:48.169 Attached to 0000:00:13.0 00:10:48.170 Namespace ID: 1 size: 1GB 00:10:48.170 Attached to 0000:00:12.0 00:10:48.170 Namespace ID: 1 size: 4GB 00:10:48.170 Namespace ID: 2 size: 4GB 00:10:48.170 Namespace ID: 3 size: 4GB 00:10:48.170 Initialization complete. 00:10:48.170 INFO: using host memory buffer for IO 00:10:48.170 Hello world! 00:10:48.170 INFO: using host memory buffer for IO 00:10:48.170 Hello world! 00:10:48.170 INFO: using host memory buffer for IO 00:10:48.170 Hello world! 00:10:48.170 INFO: using host memory buffer for IO 00:10:48.170 Hello world! 00:10:48.170 INFO: using host memory buffer for IO 00:10:48.170 Hello world! 00:10:48.170 INFO: using host memory buffer for IO 00:10:48.170 Hello world! 
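The hello_world step just above runs a prebuilt SPDK example binary through the run_test wrapper. A minimal sketch of reproducing it by hand, using the paths exactly as logged; the scripts/setup.sh step and the reading of -i as the shared-memory instance ID are assumptions about the local environment rather than anything shown in the log:

    # Sketch: reproduce the hello_world step outside the autotest harness.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk              # checkout path taken from the log
    sudo "$SPDK_DIR/scripts/setup.sh"                  # assumed prerequisite: rebind NVMe devices for userspace I/O
    sudo "$SPDK_DIR/build/examples/hello_world" -i 0   # -i 0 as logged (assumed: shared-memory ID)

One "Hello world!" is printed per namespace written and read back, matching the six namespaces enumerated above (one each on 10.0, 11.0 and 13.0, three on 12.0).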
00:10:48.429 ************************************ 00:10:48.429 END TEST nvme_hello_world 00:10:48.429 ************************************ 00:10:48.429 00:10:48.429 real 0m0.335s 00:10:48.429 user 0m0.122s 00:10:48.429 sys 0m0.166s 00:10:48.429 21:41:55 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.429 21:41:55 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:48.429 21:41:55 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:48.429 21:41:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.429 21:41:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.429 21:41:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:48.429 ************************************ 00:10:48.429 START TEST nvme_sgl 00:10:48.429 ************************************ 00:10:48.429 21:41:56 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:48.689 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:48.689 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:48.689 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:48.689 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:48.689 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:48.689 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:48.689 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:48.689 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:48.689 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:48.689 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:48.689 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:48.689 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:48.689 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:10:48.689 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:48.689 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:48.689 NVMe Readv/Writev Request test 00:10:48.689 Attached to 0000:00:10.0 00:10:48.689 Attached to 0000:00:11.0 00:10:48.689 Attached to 0000:00:13.0 00:10:48.689 Attached to 0000:00:12.0 00:10:48.689 0000:00:10.0: build_io_request_2 test passed 00:10:48.689 0000:00:10.0: build_io_request_4 test passed 00:10:48.689 0000:00:10.0: build_io_request_5 test passed 00:10:48.689 0000:00:10.0: build_io_request_6 test passed 00:10:48.689 0000:00:10.0: build_io_request_7 test passed 00:10:48.689 0000:00:10.0: build_io_request_10 test passed 00:10:48.689 0000:00:11.0: build_io_request_2 test passed 00:10:48.689 0000:00:11.0: build_io_request_4 test passed 00:10:48.689 0000:00:11.0: build_io_request_5 test passed 00:10:48.689 0000:00:11.0: build_io_request_6 test passed 00:10:48.689 0000:00:11.0: build_io_request_7 test passed 00:10:48.689 0000:00:11.0: build_io_request_10 test passed 00:10:48.689 Cleaning up... 00:10:48.689 ************************************ 00:10:48.689 END TEST nvme_sgl 00:10:48.689 ************************************ 00:10:48.689 00:10:48.689 real 0m0.396s 00:10:48.689 user 0m0.183s 00:10:48.689 sys 0m0.161s 00:10:48.689 21:41:56 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.689 21:41:56 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:48.948 21:41:56 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:48.948 21:41:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.948 21:41:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.948 21:41:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:48.948 ************************************ 00:10:48.948 START TEST nvme_e2edp 00:10:48.948 ************************************ 00:10:48.948 21:41:56 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:49.208 NVMe Write/Read with End-to-End data protection test 00:10:49.208 Attached to 0000:00:10.0 00:10:49.208 Attached to 0000:00:11.0 00:10:49.208 Attached to 0000:00:13.0 00:10:49.208 Attached to 0000:00:12.0 00:10:49.208 Cleaning up... 
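The e2edp run above attaches to all four controllers and proceeds straight to cleanup, the expected shape when no namespace is formatted with protection information for the test to exercise. A rough way to inspect a namespace's protection settings is sketched below; it assumes nvme-cli is installed and the device is bound to the kernel driver (which it is not in the middle of this run, so this only works before setup.sh or after teardown), and /dev/nvme0n1 is a placeholder device name:

    # Sketch: check a namespace's end-to-end protection settings with nvme-cli.
    # dpc = protection capabilities supported, dps = protection type in effect.
    sudo nvme id-ns /dev/nvme0n1 | grep -iE 'dpc|dps'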
00:10:49.208 00:10:49.208 real 0m0.304s 00:10:49.208 user 0m0.105s 00:10:49.208 sys 0m0.152s 00:10:49.208 ************************************ 00:10:49.208 END TEST nvme_e2edp 00:10:49.208 ************************************ 00:10:49.208 21:41:56 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.208 21:41:56 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:49.208 21:41:56 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:49.208 21:41:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.208 21:41:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.208 21:41:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.208 ************************************ 00:10:49.208 START TEST nvme_reserve 00:10:49.208 ************************************ 00:10:49.208 21:41:56 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:49.466 ===================================================== 00:10:49.466 NVMe Controller at PCI bus 0, device 16, function 0 00:10:49.466 ===================================================== 00:10:49.466 Reservations: Not Supported 00:10:49.466 ===================================================== 00:10:49.466 NVMe Controller at PCI bus 0, device 17, function 0 00:10:49.466 ===================================================== 00:10:49.466 Reservations: Not Supported 00:10:49.466 ===================================================== 00:10:49.466 NVMe Controller at PCI bus 0, device 19, function 0 00:10:49.466 ===================================================== 00:10:49.466 Reservations: Not Supported 00:10:49.466 ===================================================== 00:10:49.466 NVMe Controller at PCI bus 0, device 18, function 0 00:10:49.466 ===================================================== 00:10:49.466 Reservations: Not Supported 00:10:49.466 Reservation test passed 00:10:49.466 00:10:49.466 real 0m0.291s 00:10:49.466 user 0m0.094s 00:10:49.466 sys 0m0.153s 00:10:49.466 ************************************ 00:10:49.466 END TEST nvme_reserve 00:10:49.466 ************************************ 00:10:49.466 21:41:57 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.466 21:41:57 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:49.466 21:41:57 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:49.466 21:41:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:49.466 21:41:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.466 21:41:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.466 ************************************ 00:10:49.466 START TEST nvme_err_injection 00:10:49.466 ************************************ 00:10:49.466 21:41:57 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:50.034 NVMe Error Injection test 00:10:50.034 Attached to 0000:00:10.0 00:10:50.034 Attached to 0000:00:11.0 00:10:50.034 Attached to 0000:00:13.0 00:10:50.034 Attached to 0000:00:12.0 00:10:50.034 0000:00:10.0: get features failed as expected 00:10:50.034 0000:00:11.0: get features failed as expected 00:10:50.034 0000:00:13.0: get features failed as expected 00:10:50.034 0000:00:12.0: get features failed as expected 00:10:50.034 
0000:00:10.0: get features successfully as expected 00:10:50.034 0000:00:11.0: get features successfully as expected 00:10:50.034 0000:00:13.0: get features successfully as expected 00:10:50.034 0000:00:12.0: get features successfully as expected 00:10:50.034 0000:00:10.0: read failed as expected 00:10:50.034 0000:00:11.0: read failed as expected 00:10:50.034 0000:00:13.0: read failed as expected 00:10:50.034 0000:00:12.0: read failed as expected 00:10:50.034 0000:00:10.0: read successfully as expected 00:10:50.034 0000:00:11.0: read successfully as expected 00:10:50.034 0000:00:13.0: read successfully as expected 00:10:50.034 0000:00:12.0: read successfully as expected 00:10:50.034 Cleaning up... 00:10:50.034 00:10:50.034 real 0m0.323s 00:10:50.034 user 0m0.135s 00:10:50.034 sys 0m0.141s 00:10:50.034 21:41:57 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.034 ************************************ 00:10:50.034 END TEST nvme_err_injection 00:10:50.034 ************************************ 00:10:50.034 21:41:57 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:50.034 21:41:57 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:50.034 21:41:57 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:10:50.034 21:41:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.034 21:41:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:50.034 ************************************ 00:10:50.034 START TEST nvme_overhead 00:10:50.034 ************************************ 00:10:50.034 21:41:57 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:51.453 Initializing NVMe Controllers 00:10:51.453 Attached to 0000:00:10.0 00:10:51.453 Attached to 0000:00:11.0 00:10:51.453 Attached to 0000:00:13.0 00:10:51.453 Attached to 0000:00:12.0 00:10:51.453 Initialization complete. Launching workers. 
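The overhead run above is invoked with -o 4096 -t 1 -H -i 0. Reading those flags as a 4 KiB I/O size, a one-second run, histogram output, and shared-memory ID 0 is an inference from the tool's conventions, not something the log states. A sketch of rerunning the same measurement by hand, with the invocation copied verbatim from the log:

    # Sketch: rerun the per-I/O overhead measurement with the flags from the log.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # checkout path taken from the log
    sudo "$SPDK_DIR/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0

The submit and complete histograms that follow are cumulative: each row gives a latency bucket in microseconds, the cumulative percentage of I/Os at or below it, and the per-bucket count in parentheses.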
00:10:51.453 submit (in ns) avg, min, max = 13166.8, 12130.9, 105037.8 00:10:51.453 complete (in ns) avg, min, max = 9009.9, 7760.6, 93322.9 00:10:51.453 00:10:51.453 Submit histogram 00:10:51.453 ================ 00:10:51.453 Range in us Cumulative Count 00:10:51.453 12.080 - 12.132: 0.0165% ( 1) 00:10:51.453 12.132 - 12.183: 0.0494% ( 2) 00:10:51.453 12.183 - 12.235: 0.1646% ( 7) 00:10:51.453 12.235 - 12.286: 0.5433% ( 23) 00:10:51.453 12.286 - 12.337: 1.8275% ( 78) 00:10:51.453 12.337 - 12.389: 6.0257% ( 255) 00:10:51.453 12.389 - 12.440: 13.6977% ( 466) 00:10:51.453 12.440 - 12.492: 23.4442% ( 592) 00:10:51.453 12.492 - 12.543: 34.4583% ( 669) 00:10:51.453 12.543 - 12.594: 42.5914% ( 494) 00:10:51.453 12.594 - 12.646: 47.7939% ( 316) 00:10:51.453 12.646 - 12.697: 51.4653% ( 223) 00:10:51.453 12.697 - 12.749: 54.5769% ( 189) 00:10:51.453 12.749 - 12.800: 57.5074% ( 178) 00:10:51.453 12.800 - 12.851: 60.3556% ( 173) 00:10:51.453 12.851 - 12.903: 63.2697% ( 177) 00:10:51.453 12.903 - 12.954: 66.5789% ( 201) 00:10:51.453 12.954 - 13.006: 69.7563% ( 193) 00:10:51.453 13.006 - 13.057: 73.1808% ( 208) 00:10:51.453 13.057 - 13.108: 76.3747% ( 194) 00:10:51.453 13.108 - 13.160: 79.2558% ( 175) 00:10:51.453 13.160 - 13.263: 83.8821% ( 281) 00:10:51.453 13.263 - 13.365: 87.6523% ( 229) 00:10:51.453 13.365 - 13.468: 90.7474% ( 188) 00:10:51.453 13.468 - 13.571: 92.6408% ( 115) 00:10:51.454 13.571 - 13.674: 93.3652% ( 44) 00:10:51.454 13.674 - 13.777: 93.7274% ( 22) 00:10:51.454 13.777 - 13.880: 93.9414% ( 13) 00:10:51.454 13.880 - 13.982: 94.0731% ( 8) 00:10:51.454 13.982 - 14.085: 94.1554% ( 5) 00:10:51.454 14.085 - 14.188: 94.2377% ( 5) 00:10:51.454 14.291 - 14.394: 94.3036% ( 4) 00:10:51.454 14.394 - 14.496: 94.3530% ( 3) 00:10:51.454 14.496 - 14.599: 94.3859% ( 2) 00:10:51.454 14.702 - 14.805: 94.4024% ( 1) 00:10:51.454 14.805 - 14.908: 94.4188% ( 1) 00:10:51.454 15.010 - 15.113: 94.4353% ( 1) 00:10:51.454 15.113 - 15.216: 94.4682% ( 2) 00:10:51.454 15.216 - 15.319: 94.4847% ( 1) 00:10:51.454 15.319 - 15.422: 94.5012% ( 1) 00:10:51.454 15.524 - 15.627: 94.5176% ( 1) 00:10:51.454 15.730 - 15.833: 94.5505% ( 2) 00:10:51.454 15.833 - 15.936: 94.5999% ( 3) 00:10:51.454 16.039 - 16.141: 94.6164% ( 1) 00:10:51.454 16.141 - 16.244: 94.6329% ( 1) 00:10:51.454 16.244 - 16.347: 94.6493% ( 1) 00:10:51.454 16.347 - 16.450: 94.6823% ( 2) 00:10:51.454 16.450 - 16.553: 94.7810% ( 6) 00:10:51.454 16.553 - 16.655: 94.9951% ( 13) 00:10:51.454 16.655 - 16.758: 95.0938% ( 6) 00:10:51.454 16.758 - 16.861: 95.2585% ( 10) 00:10:51.454 16.861 - 16.964: 95.3243% ( 4) 00:10:51.454 16.964 - 17.067: 95.5054% ( 11) 00:10:51.454 17.067 - 17.169: 95.6536% ( 9) 00:10:51.454 17.169 - 17.272: 95.7524% ( 6) 00:10:51.454 17.272 - 17.375: 95.8841% ( 8) 00:10:51.454 17.375 - 17.478: 96.0652% ( 11) 00:10:51.454 17.478 - 17.581: 96.2298% ( 10) 00:10:51.454 17.581 - 17.684: 96.4109% ( 11) 00:10:51.454 17.684 - 17.786: 96.6414% ( 14) 00:10:51.454 17.786 - 17.889: 96.8390% ( 12) 00:10:51.454 17.889 - 17.992: 97.0695% ( 14) 00:10:51.454 17.992 - 18.095: 97.3164% ( 15) 00:10:51.454 18.095 - 18.198: 97.4481% ( 8) 00:10:51.454 18.198 - 18.300: 97.5798% ( 8) 00:10:51.454 18.300 - 18.403: 97.7280% ( 9) 00:10:51.454 18.403 - 18.506: 97.9256% ( 12) 00:10:51.454 18.506 - 18.609: 98.1396% ( 13) 00:10:51.454 18.609 - 18.712: 98.1725% ( 2) 00:10:51.454 18.712 - 18.814: 98.3207% ( 9) 00:10:51.454 18.814 - 18.917: 98.3536% ( 2) 00:10:51.454 18.917 - 19.020: 98.4524% ( 6) 00:10:51.454 19.020 - 19.123: 98.5347% ( 5) 00:10:51.454 19.123 - 19.226: 
98.6335% ( 6) 00:10:51.454 19.226 - 19.329: 98.6994% ( 4) 00:10:51.454 19.329 - 19.431: 98.7488% ( 3) 00:10:51.454 19.431 - 19.534: 98.7652% ( 1) 00:10:51.454 19.534 - 19.637: 98.7982% ( 2) 00:10:51.454 19.637 - 19.740: 98.8146% ( 1) 00:10:51.454 19.740 - 19.843: 98.8640% ( 3) 00:10:51.454 19.843 - 19.945: 98.8969% ( 2) 00:10:51.454 19.945 - 20.048: 98.9134% ( 1) 00:10:51.454 20.048 - 20.151: 98.9463% ( 2) 00:10:51.454 20.151 - 20.254: 98.9628% ( 1) 00:10:51.454 20.254 - 20.357: 99.0122% ( 3) 00:10:51.454 20.562 - 20.665: 99.0286% ( 1) 00:10:51.454 20.665 - 20.768: 99.0780% ( 3) 00:10:51.454 20.768 - 20.871: 99.0945% ( 1) 00:10:51.454 20.871 - 20.973: 99.1439% ( 3) 00:10:51.454 20.973 - 21.076: 99.1604% ( 1) 00:10:51.454 21.076 - 21.179: 99.1768% ( 1) 00:10:51.454 21.179 - 21.282: 99.1933% ( 1) 00:10:51.454 21.282 - 21.385: 99.2262% ( 2) 00:10:51.454 21.385 - 21.488: 99.2427% ( 1) 00:10:51.454 21.796 - 21.899: 99.2591% ( 1) 00:10:51.454 21.899 - 22.002: 99.2921% ( 2) 00:10:51.454 22.413 - 22.516: 99.3085% ( 1) 00:10:51.454 22.618 - 22.721: 99.3250% ( 1) 00:10:51.454 23.030 - 23.133: 99.3415% ( 1) 00:10:51.454 23.133 - 23.235: 99.3579% ( 1) 00:10:51.454 23.338 - 23.441: 99.3744% ( 1) 00:10:51.454 23.647 - 23.749: 99.3908% ( 1) 00:10:51.454 23.852 - 23.955: 99.4073% ( 1) 00:10:51.454 24.058 - 24.161: 99.4238% ( 1) 00:10:51.454 24.263 - 24.366: 99.4402% ( 1) 00:10:51.454 24.469 - 24.572: 99.4567% ( 1) 00:10:51.454 24.880 - 24.983: 99.4896% ( 2) 00:10:51.454 24.983 - 25.086: 99.5061% ( 1) 00:10:51.454 25.086 - 25.189: 99.5226% ( 1) 00:10:51.454 25.292 - 25.394: 99.5390% ( 1) 00:10:51.454 25.497 - 25.600: 99.5555% ( 1) 00:10:51.454 25.600 - 25.703: 99.6049% ( 3) 00:10:51.454 25.703 - 25.806: 99.6213% ( 1) 00:10:51.454 25.806 - 25.908: 99.6378% ( 1) 00:10:51.454 26.114 - 26.217: 99.6543% ( 1) 00:10:51.454 26.320 - 26.525: 99.6707% ( 1) 00:10:51.454 26.731 - 26.937: 99.6872% ( 1) 00:10:51.454 28.582 - 28.787: 99.7037% ( 1) 00:10:51.454 28.993 - 29.198: 99.7201% ( 1) 00:10:51.454 29.198 - 29.404: 99.7695% ( 3) 00:10:51.454 29.404 - 29.610: 99.7860% ( 1) 00:10:51.454 29.610 - 29.815: 99.8024% ( 1) 00:10:51.454 30.021 - 30.227: 99.8189% ( 1) 00:10:51.454 31.255 - 31.460: 99.8354% ( 1) 00:10:51.454 32.694 - 32.900: 99.8518% ( 1) 00:10:51.454 35.367 - 35.573: 99.8683% ( 1) 00:10:51.454 40.096 - 40.302: 99.8848% ( 1) 00:10:51.454 43.181 - 43.386: 99.9012% ( 1) 00:10:51.454 45.443 - 45.648: 99.9177% ( 1) 00:10:51.454 51.406 - 51.611: 99.9341% ( 1) 00:10:51.454 57.574 - 57.986: 99.9506% ( 1) 00:10:51.454 59.219 - 59.631: 99.9671% ( 1) 00:10:51.454 78.137 - 78.548: 99.9835% ( 1) 00:10:51.454 104.867 - 105.279: 100.0000% ( 1) 00:10:51.454 00:10:51.454 Complete histogram 00:10:51.454 ================== 00:10:51.454 Range in us Cumulative Count 00:10:51.454 7.711 - 7.762: 0.0165% ( 1) 00:10:51.454 7.762 - 7.814: 0.1646% ( 9) 00:10:51.454 7.814 - 7.865: 0.9220% ( 46) 00:10:51.454 7.865 - 7.916: 1.7781% ( 52) 00:10:51.454 7.916 - 7.968: 3.1281% ( 82) 00:10:51.454 7.968 - 8.019: 4.4616% ( 81) 00:10:51.454 8.019 - 8.071: 5.4495% ( 60) 00:10:51.454 8.071 - 8.122: 6.1903% ( 45) 00:10:51.454 8.122 - 8.173: 6.6842% ( 30) 00:10:51.454 8.173 - 8.225: 7.0629% ( 23) 00:10:51.454 8.225 - 8.276: 7.4745% ( 25) 00:10:51.454 8.276 - 8.328: 7.7379% ( 16) 00:10:51.454 8.328 - 8.379: 7.8861% ( 9) 00:10:51.454 8.379 - 8.431: 8.4952% ( 37) 00:10:51.454 8.431 - 8.482: 10.7014% ( 134) 00:10:51.454 8.482 - 8.533: 14.7843% ( 248) 00:10:51.454 8.533 - 8.585: 19.8880% ( 310) 00:10:51.454 8.585 - 8.636: 25.3046% ( 329) 00:10:51.454 
8.636 - 8.688: 32.3675% ( 429) 00:10:51.454 8.688 - 8.739: 42.7725% ( 632) 00:10:51.454 8.739 - 8.790: 52.9141% ( 616) 00:10:51.454 8.790 - 8.842: 61.7056% ( 534) 00:10:51.454 8.842 - 8.893: 67.8136% ( 371) 00:10:51.454 8.893 - 8.945: 73.8558% ( 367) 00:10:51.454 8.945 - 8.996: 79.4205% ( 338) 00:10:51.454 8.996 - 9.047: 83.4047% ( 242) 00:10:51.454 9.047 - 9.099: 86.9279% ( 214) 00:10:51.454 9.099 - 9.150: 89.4962% ( 156) 00:10:51.454 9.150 - 9.202: 91.3895% ( 115) 00:10:51.454 9.202 - 9.253: 92.9536% ( 95) 00:10:51.454 9.253 - 9.304: 94.0402% ( 66) 00:10:51.454 9.304 - 9.356: 94.9951% ( 58) 00:10:51.454 9.356 - 9.407: 95.4560% ( 28) 00:10:51.454 9.407 - 9.459: 95.9664% ( 31) 00:10:51.454 9.459 - 9.510: 96.3945% ( 26) 00:10:51.454 9.510 - 9.561: 96.6743% ( 17) 00:10:51.454 9.561 - 9.613: 96.9048% ( 14) 00:10:51.454 9.613 - 9.664: 97.1683% ( 16) 00:10:51.454 9.664 - 9.716: 97.2835% ( 7) 00:10:51.454 9.716 - 9.767: 97.4152% ( 8) 00:10:51.454 9.767 - 9.818: 97.4975% ( 5) 00:10:51.454 9.870 - 9.921: 97.5140% ( 1) 00:10:51.454 9.921 - 9.973: 97.5798% ( 4) 00:10:51.454 9.973 - 10.024: 97.5963% ( 1) 00:10:51.454 10.076 - 10.127: 97.6128% ( 1) 00:10:51.454 10.281 - 10.333: 97.6292% ( 1) 00:10:51.454 10.692 - 10.744: 97.6457% ( 1) 00:10:51.454 11.052 - 11.104: 97.6622% ( 1) 00:10:51.454 11.206 - 11.258: 97.6786% ( 1) 00:10:51.454 12.080 - 12.132: 97.6951% ( 1) 00:10:51.454 12.851 - 12.903: 97.7116% ( 1) 00:10:51.454 13.365 - 13.468: 97.7280% ( 1) 00:10:51.454 13.468 - 13.571: 97.7445% ( 1) 00:10:51.454 13.571 - 13.674: 97.7774% ( 2) 00:10:51.454 13.777 - 13.880: 97.8433% ( 4) 00:10:51.454 13.880 - 13.982: 97.8927% ( 3) 00:10:51.454 13.982 - 14.085: 97.9420% ( 3) 00:10:51.454 14.085 - 14.188: 97.9914% ( 3) 00:10:51.454 14.188 - 14.291: 98.0902% ( 6) 00:10:51.454 14.291 - 14.394: 98.2219% ( 8) 00:10:51.454 14.394 - 14.496: 98.4195% ( 12) 00:10:51.454 14.496 - 14.599: 98.5018% ( 5) 00:10:51.454 14.599 - 14.702: 98.5347% ( 2) 00:10:51.454 14.702 - 14.805: 98.6006% ( 4) 00:10:51.454 14.805 - 14.908: 98.6500% ( 3) 00:10:51.454 14.908 - 15.010: 98.6994% ( 3) 00:10:51.454 15.010 - 15.113: 98.7817% ( 5) 00:10:51.454 15.113 - 15.216: 98.8146% ( 2) 00:10:51.454 15.216 - 15.319: 98.8475% ( 2) 00:10:51.454 15.422 - 15.524: 98.8640% ( 1) 00:10:51.454 15.524 - 15.627: 98.8805% ( 1) 00:10:51.454 15.627 - 15.730: 98.9299% ( 3) 00:10:51.454 15.730 - 15.833: 98.9463% ( 1) 00:10:51.454 15.936 - 16.039: 98.9628% ( 1) 00:10:51.454 17.169 - 17.272: 98.9957% ( 2) 00:10:51.454 17.272 - 17.375: 99.0122% ( 1) 00:10:51.454 17.478 - 17.581: 99.0286% ( 1) 00:10:51.454 17.684 - 17.786: 99.0451% ( 1) 00:10:51.454 17.889 - 17.992: 99.0616% ( 1) 00:10:51.454 18.300 - 18.403: 99.0780% ( 1) 00:10:51.454 18.506 - 18.609: 99.0945% ( 1) 00:10:51.454 18.609 - 18.712: 99.1110% ( 1) 00:10:51.454 19.020 - 19.123: 99.1604% ( 3) 00:10:51.454 19.226 - 19.329: 99.2097% ( 3) 00:10:51.454 19.329 - 19.431: 99.2427% ( 2) 00:10:51.455 19.431 - 19.534: 99.2591% ( 1) 00:10:51.455 19.534 - 19.637: 99.2921% ( 2) 00:10:51.455 19.637 - 19.740: 99.3415% ( 3) 00:10:51.455 19.740 - 19.843: 99.3579% ( 1) 00:10:51.455 20.254 - 20.357: 99.3908% ( 2) 00:10:51.455 20.357 - 20.459: 99.4073% ( 1) 00:10:51.455 20.871 - 20.973: 99.4238% ( 1) 00:10:51.455 20.973 - 21.076: 99.4402% ( 1) 00:10:51.455 21.076 - 21.179: 99.4732% ( 2) 00:10:51.455 21.179 - 21.282: 99.4896% ( 1) 00:10:51.455 21.282 - 21.385: 99.5061% ( 1) 00:10:51.455 21.488 - 21.590: 99.5555% ( 3) 00:10:51.455 21.590 - 21.693: 99.5719% ( 1) 00:10:51.455 21.899 - 22.002: 99.5884% ( 1) 00:10:51.455 23.647 - 
23.749: 99.6049% ( 1) 00:10:51.455 23.852 - 23.955: 99.6213% ( 1) 00:10:51.455 24.469 - 24.572: 99.6378% ( 1) 00:10:51.455 24.983 - 25.086: 99.6543% ( 1) 00:10:51.455 25.086 - 25.189: 99.6707% ( 1) 00:10:51.455 25.292 - 25.394: 99.6872% ( 1) 00:10:51.455 25.497 - 25.600: 99.7366% ( 3) 00:10:51.455 25.600 - 25.703: 99.7530% ( 1) 00:10:51.455 25.703 - 25.806: 99.7695% ( 1) 00:10:51.455 25.908 - 26.011: 99.7860% ( 1) 00:10:51.455 26.114 - 26.217: 99.8024% ( 1) 00:10:51.455 26.320 - 26.525: 99.8189% ( 1) 00:10:51.455 26.525 - 26.731: 99.8354% ( 1) 00:10:51.455 27.553 - 27.759: 99.8518% ( 1) 00:10:51.455 30.843 - 31.049: 99.8683% ( 1) 00:10:51.455 32.077 - 32.283: 99.8848% ( 1) 00:10:51.455 34.750 - 34.956: 99.9012% ( 1) 00:10:51.455 36.601 - 36.806: 99.9177% ( 1) 00:10:51.455 38.040 - 38.246: 99.9341% ( 1) 00:10:51.455 43.592 - 43.798: 99.9506% ( 1) 00:10:51.455 44.003 - 44.209: 99.9671% ( 1) 00:10:51.455 86.773 - 87.184: 99.9835% ( 1) 00:10:51.455 92.941 - 93.353: 100.0000% ( 1) 00:10:51.455 00:10:51.455 ************************************ 00:10:51.455 END TEST nvme_overhead 00:10:51.455 ************************************ 00:10:51.455 00:10:51.455 real 0m1.317s 00:10:51.455 user 0m1.117s 00:10:51.455 sys 0m0.148s 00:10:51.455 21:41:58 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.455 21:41:58 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:51.455 21:41:58 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:51.455 21:41:58 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:51.455 21:41:58 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.455 21:41:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:51.455 ************************************ 00:10:51.455 START TEST nvme_arbitration 00:10:51.455 ************************************ 00:10:51.455 21:41:58 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:54.747 Initializing NVMe Controllers 00:10:54.747 Attached to 0000:00:10.0 00:10:54.747 Attached to 0000:00:11.0 00:10:54.747 Attached to 0000:00:13.0 00:10:54.747 Attached to 0000:00:12.0 00:10:54.747 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:54.747 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:54.747 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:54.747 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:54.747 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:54.747 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:54.747 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:54.747 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:54.747 Initialization complete. Launching workers. 
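The arbitration example echoed its effective configuration above (-q 64 queue depth, 50/50 randrw mix, core mask 0xf, 100000 I/Os per worker, 3-second duration) before launching one thread per core. A sketch of the equivalent hand run, copying the short form of the invocation from the log; SPDK_DIR and the need for sudo are assumptions about the environment:

    # Sketch: rerun the round-robin arbitration example for 3 seconds,
    # exactly as the harness invoked it above.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_DIR/build/examples/arbitration" -t 3 -i 0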
00:10:54.747 Starting thread on core 1 with urgent priority queue 00:10:54.747 Starting thread on core 2 with urgent priority queue 00:10:54.747 Starting thread on core 3 with urgent priority queue 00:10:54.747 Starting thread on core 0 with urgent priority queue 00:10:54.747 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.747 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.747 QEMU NVMe Ctrl (12341 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.747 QEMU NVMe Ctrl (12342 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:10:54.747 QEMU NVMe Ctrl (12343 ) core 2: 554.67 IO/s 180.29 secs/100000 ios 00:10:54.747 QEMU NVMe Ctrl (12342 ) core 3: 597.33 IO/s 167.41 secs/100000 ios 00:10:54.747 ======================================================== 00:10:54.747 00:10:54.747 00:10:54.747 real 0m3.457s 00:10:54.747 user 0m9.426s 00:10:54.747 sys 0m0.174s 00:10:54.747 ************************************ 00:10:54.747 END TEST nvme_arbitration 00:10:54.747 ************************************ 00:10:54.747 21:42:02 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.747 21:42:02 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:55.006 21:42:02 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:55.006 21:42:02 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:55.006 21:42:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.006 21:42:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:55.006 ************************************ 00:10:55.006 START TEST nvme_single_aen 00:10:55.006 ************************************ 00:10:55.006 21:42:02 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:55.265 Asynchronous Event Request test 00:10:55.265 Attached to 0000:00:10.0 00:10:55.265 Attached to 0000:00:11.0 00:10:55.265 Attached to 0000:00:13.0 00:10:55.265 Attached to 0000:00:12.0 00:10:55.265 Reset controller to setup AER completions for this process 00:10:55.265 Registering asynchronous event callbacks... 
00:10:55.265 Getting orig temperature thresholds of all controllers 00:10:55.265 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:55.265 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:55.265 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:55.265 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:55.265 Setting all controllers temperature threshold low to trigger AER 00:10:55.265 Waiting for all controllers temperature threshold to be set lower 00:10:55.265 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:55.265 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:55.265 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:55.265 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:55.265 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:55.265 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:55.265 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:55.265 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:55.265 Waiting for all controllers to trigger AER and reset threshold 00:10:55.265 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.265 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.265 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.265 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:55.265 Cleaning up... 00:10:55.265 ************************************ 00:10:55.265 END TEST nvme_single_aen 00:10:55.265 ************************************ 00:10:55.265 00:10:55.265 real 0m0.302s 00:10:55.265 user 0m0.105s 00:10:55.265 sys 0m0.152s 00:10:55.265 21:42:02 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.265 21:42:02 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:55.265 21:42:02 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:55.265 21:42:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.265 21:42:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.265 21:42:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:55.265 ************************************ 00:10:55.265 START TEST nvme_doorbell_aers 00:10:55.265 ************************************ 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
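The doorbell test builds its device list by generating a JSON config with gen_nvme.sh and extracting each controller's traddr with jq, as the xtrace above shows. The same pipeline runs standalone; only SPDK_DIR is an assumption here, taken from the paths in the log:

    # Sketch: enumerate NVMe PCI addresses the way get_nvme_bdfs does.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    bdfs=($("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 as logged

Each address is then handed to doorbell_aers under timeout --preserve-status 10, which is why every per-device round below runs for roughly ten seconds.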
00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:55.265 21:42:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:55.833 [2024-12-10 21:42:03.282308] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:05.821 Executing: test_write_invalid_db 00:11:05.821 Waiting for AER completion... 00:11:05.821 Failure: test_write_invalid_db 00:11:05.821 00:11:05.821 Executing: test_invalid_db_write_overflow_sq 00:11:05.821 Waiting for AER completion... 00:11:05.821 Failure: test_invalid_db_write_overflow_sq 00:11:05.821 00:11:05.821 Executing: test_invalid_db_write_overflow_cq 00:11:05.821 Waiting for AER completion... 00:11:05.821 Failure: test_invalid_db_write_overflow_cq 00:11:05.821 00:11:05.822 21:42:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:05.822 21:42:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:05.822 [2024-12-10 21:42:13.346072] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:15.801 Executing: test_write_invalid_db 00:11:15.801 Waiting for AER completion... 00:11:15.801 Failure: test_write_invalid_db 00:11:15.801 00:11:15.801 Executing: test_invalid_db_write_overflow_sq 00:11:15.801 Waiting for AER completion... 00:11:15.801 Failure: test_invalid_db_write_overflow_sq 00:11:15.801 00:11:15.801 Executing: test_invalid_db_write_overflow_cq 00:11:15.801 Waiting for AER completion... 00:11:15.801 Failure: test_invalid_db_write_overflow_cq 00:11:15.801 00:11:15.801 21:42:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:15.801 21:42:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:15.801 [2024-12-10 21:42:23.409294] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:25.774 Executing: test_write_invalid_db 00:11:25.774 Waiting for AER completion... 00:11:25.774 Failure: test_write_invalid_db 00:11:25.774 00:11:25.774 Executing: test_invalid_db_write_overflow_sq 00:11:25.774 Waiting for AER completion... 00:11:25.774 Failure: test_invalid_db_write_overflow_sq 00:11:25.774 00:11:25.774 Executing: test_invalid_db_write_overflow_cq 00:11:25.774 Waiting for AER completion... 
00:11:25.774 Failure: test_invalid_db_write_overflow_cq 00:11:25.774 00:11:25.774 21:42:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:25.774 21:42:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:25.774 [2024-12-10 21:42:33.460305] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:35.752 Executing: test_write_invalid_db 00:11:35.752 Waiting for AER completion... 00:11:35.752 Failure: test_write_invalid_db 00:11:35.752 00:11:35.752 Executing: test_invalid_db_write_overflow_sq 00:11:35.752 Waiting for AER completion... 00:11:35.752 Failure: test_invalid_db_write_overflow_sq 00:11:35.752 00:11:35.752 Executing: test_invalid_db_write_overflow_cq 00:11:35.752 Waiting for AER completion... 00:11:35.752 Failure: test_invalid_db_write_overflow_cq 00:11:35.752 00:11:35.752 ************************************ 00:11:35.752 END TEST nvme_doorbell_aers 00:11:35.752 ************************************ 00:11:35.752 00:11:35.753 real 0m40.341s 00:11:35.753 user 0m28.670s 00:11:35.753 sys 0m11.295s 00:11:35.753 21:42:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.753 21:42:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:35.753 21:42:43 nvme -- nvme/nvme.sh@97 -- # uname 00:11:35.753 21:42:43 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:35.753 21:42:43 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:35.753 21:42:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:35.753 21:42:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.753 21:42:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:35.753 ************************************ 00:11:35.753 START TEST nvme_multi_aen 00:11:35.753 ************************************ 00:11:35.753 21:42:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:36.011 [2024-12-10 21:42:43.576378] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.576488] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.576506] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.578381] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.578426] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.578442] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.579846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. 
Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.580011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.580031] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.581284] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.581317] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 [2024-12-10 21:42:43.581332] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65887) is not found. Dropping the request. 00:11:36.011 Child process pid: 66405 00:11:36.269 [Child] Asynchronous Event Request test 00:11:36.269 [Child] Attached to 0000:00:10.0 00:11:36.269 [Child] Attached to 0000:00:11.0 00:11:36.269 [Child] Attached to 0000:00:13.0 00:11:36.269 [Child] Attached to 0000:00:12.0 00:11:36.269 [Child] Registering asynchronous event callbacks... 00:11:36.269 [Child] Getting orig temperature thresholds of all controllers 00:11:36.269 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:36.269 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:36.269 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:36.269 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:36.269 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:36.269 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:36.269 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:36.269 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:36.269 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:36.269 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.269 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.269 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.269 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.269 [Child] Cleaning up... 00:11:36.269 Asynchronous Event Request test 00:11:36.269 Attached to 0000:00:10.0 00:11:36.269 Attached to 0000:00:11.0 00:11:36.269 Attached to 0000:00:13.0 00:11:36.269 Attached to 0000:00:12.0 00:11:36.269 Reset controller to setup AER completions for this process 00:11:36.269 Registering asynchronous event callbacks... 
00:11:36.269 Getting orig temperature thresholds of all controllers 00:11:36.269 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:36.269 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:36.269 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:36.269 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:36.269 Setting all controllers temperature threshold low to trigger AER 00:11:36.269 Waiting for all controllers temperature threshold to be set lower 00:11:36.269 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:36.269 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:36.269 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:36.269 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:36.269 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:36.269 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:36.269 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:36.269 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:36.269 Waiting for all controllers to trigger AER and reset threshold 00:11:36.269 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.269 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.269 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.269 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:36.269 Cleaning up... 00:11:36.269 ************************************ 00:11:36.269 END TEST nvme_multi_aen 00:11:36.269 ************************************ 00:11:36.269 00:11:36.269 real 0m0.662s 00:11:36.269 user 0m0.230s 00:11:36.269 sys 0m0.326s 00:11:36.269 21:42:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.269 21:42:43 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:36.527 21:42:44 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:36.527 21:42:44 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.527 21:42:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.527 21:42:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:36.527 ************************************ 00:11:36.527 START TEST nvme_startup 00:11:36.527 ************************************ 00:11:36.527 21:42:44 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:36.785 Initializing NVMe Controllers 00:11:36.785 Attached to 0000:00:10.0 00:11:36.785 Attached to 0000:00:11.0 00:11:36.785 Attached to 0000:00:13.0 00:11:36.785 Attached to 0000:00:12.0 00:11:36.785 Initialization complete. 00:11:36.785 Time used:200004.609 (us). 
00:11:36.785 00:11:36.785 real 0m0.308s 00:11:36.785 user 0m0.111s 00:11:36.785 sys 0m0.152s 00:11:36.785 ************************************ 00:11:36.785 END TEST nvme_startup 00:11:36.785 ************************************ 00:11:36.785 21:42:44 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.785 21:42:44 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:36.785 21:42:44 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:36.785 21:42:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:36.785 21:42:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.785 21:42:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:36.785 ************************************ 00:11:36.785 START TEST nvme_multi_secondary 00:11:36.785 ************************************ 00:11:36.785 21:42:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:11:36.785 21:42:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=66461 00:11:36.785 21:42:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:36.785 21:42:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=66462 00:11:36.785 21:42:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:36.785 21:42:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:40.116 Initializing NVMe Controllers 00:11:40.116 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:40.116 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:40.116 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:40.116 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:40.116 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:40.116 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:40.116 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:40.116 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:40.116 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:40.116 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:40.116 Initialization complete. Launching workers. 
00:11:40.116 ======================================================== 00:11:40.116 Latency(us) 00:11:40.116 Device Information : IOPS MiB/s Average min max 00:11:40.116 PCIE (0000:00:10.0) NSID 1 from core 1: 5271.67 20.59 3032.85 1089.67 11654.56 00:11:40.116 PCIE (0000:00:11.0) NSID 1 from core 1: 5271.67 20.59 3034.97 1104.23 11866.46 00:11:40.116 PCIE (0000:00:13.0) NSID 1 from core 1: 5271.67 20.59 3035.21 1090.63 12109.50 00:11:40.116 PCIE (0000:00:12.0) NSID 1 from core 1: 5271.67 20.59 3035.29 1105.21 8023.88 00:11:40.116 PCIE (0000:00:12.0) NSID 2 from core 1: 5271.67 20.59 3035.61 1107.53 10422.06 00:11:40.116 PCIE (0000:00:12.0) NSID 3 from core 1: 5277.00 20.61 3032.65 1105.70 11187.91 00:11:40.116 ======================================================== 00:11:40.116 Total : 31635.37 123.58 3034.43 1089.67 12109.50 00:11:40.116 00:11:40.375 Initializing NVMe Controllers 00:11:40.375 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:40.375 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:40.375 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:40.375 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:40.375 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:40.375 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:40.375 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:40.375 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:40.375 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:40.375 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:40.375 Initialization complete. Launching workers. 00:11:40.375 ======================================================== 00:11:40.375 Latency(us) 00:11:40.375 Device Information : IOPS MiB/s Average min max 00:11:40.375 PCIE (0000:00:10.0) NSID 1 from core 2: 3233.59 12.63 4945.99 1214.25 11524.38 00:11:40.375 PCIE (0000:00:11.0) NSID 1 from core 2: 3233.59 12.63 4947.62 1110.86 16336.54 00:11:40.375 PCIE (0000:00:13.0) NSID 1 from core 2: 3233.59 12.63 4947.55 1289.57 12506.90 00:11:40.375 PCIE (0000:00:12.0) NSID 1 from core 2: 3233.59 12.63 4947.54 1271.76 12982.74 00:11:40.375 PCIE (0000:00:12.0) NSID 2 from core 2: 3233.59 12.63 4947.61 1235.12 13419.46 00:11:40.375 PCIE (0000:00:12.0) NSID 3 from core 2: 3233.59 12.63 4947.61 1007.82 12456.79 00:11:40.375 ======================================================== 00:11:40.375 Total : 19401.53 75.79 4947.32 1007.82 16336.54 00:11:40.375 00:11:40.375 21:42:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 66461 00:11:42.277 Initializing NVMe Controllers 00:11:42.277 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:42.277 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:42.277 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:42.277 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:42.277 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:42.277 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:42.277 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:42.277 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:42.277 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:42.277 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:42.277 Initialization complete. Launching workers. 
00:11:42.277 ======================================================== 00:11:42.277 Latency(us) 00:11:42.277 Device Information : IOPS MiB/s Average min max 00:11:42.277 PCIE (0000:00:10.0) NSID 1 from core 0: 8207.68 32.06 1947.88 951.98 7215.79 00:11:42.277 PCIE (0000:00:11.0) NSID 1 from core 0: 8207.68 32.06 1948.93 967.13 7248.48 00:11:42.277 PCIE (0000:00:13.0) NSID 1 from core 0: 8207.68 32.06 1948.89 939.25 7509.28 00:11:42.277 PCIE (0000:00:12.0) NSID 1 from core 0: 8207.68 32.06 1948.85 912.68 7556.55 00:11:42.277 PCIE (0000:00:12.0) NSID 2 from core 0: 8207.68 32.06 1948.81 873.29 7001.66 00:11:42.277 PCIE (0000:00:12.0) NSID 3 from core 0: 8207.68 32.06 1948.76 768.78 7133.39 00:11:42.277 ======================================================== 00:11:42.277 Total : 49246.07 192.37 1948.69 768.78 7556.55 00:11:42.277 00:11:42.277 21:42:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 66462 00:11:42.277 21:42:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=66531 00:11:42.277 21:42:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:42.277 21:42:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=66532 00:11:42.277 21:42:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:42.277 21:42:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:45.564 Initializing NVMe Controllers 00:11:45.564 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:45.564 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:45.564 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:45.564 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:45.564 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:45.564 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:45.564 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:45.564 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:45.564 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:45.564 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:45.564 Initialization complete. Launching workers. 
00:11:45.564 ======================================================== 00:11:45.564 Latency(us) 00:11:45.564 Device Information : IOPS MiB/s Average min max 00:11:45.564 PCIE (0000:00:10.0) NSID 1 from core 0: 5246.03 20.49 3047.69 964.99 5690.07 00:11:45.564 PCIE (0000:00:11.0) NSID 1 from core 0: 5246.03 20.49 3049.65 980.57 6292.09 00:11:45.564 PCIE (0000:00:13.0) NSID 1 from core 0: 5246.03 20.49 3049.96 904.96 5981.12 00:11:45.564 PCIE (0000:00:12.0) NSID 1 from core 0: 5246.03 20.49 3050.26 978.01 6373.34 00:11:45.564 PCIE (0000:00:12.0) NSID 2 from core 0: 5246.03 20.49 3050.40 970.76 5817.76 00:11:45.564 PCIE (0000:00:12.0) NSID 3 from core 0: 5251.36 20.51 3047.41 982.73 5799.16 00:11:45.564 ======================================================== 00:11:45.564 Total : 31481.52 122.97 3049.23 904.96 6373.34 00:11:45.564 00:11:45.565 Initializing NVMe Controllers 00:11:45.565 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:45.565 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:45.565 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:45.565 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:45.565 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:45.565 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:45.565 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:45.565 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:45.565 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:45.565 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:45.565 Initialization complete. Launching workers. 00:11:45.565 ======================================================== 00:11:45.565 Latency(us) 00:11:45.565 Device Information : IOPS MiB/s Average min max 00:11:45.565 PCIE (0000:00:10.0) NSID 1 from core 1: 4951.50 19.34 3228.90 1057.96 6632.36 00:11:45.565 PCIE (0000:00:11.0) NSID 1 from core 1: 4951.50 19.34 3230.65 1068.94 6147.97 00:11:45.565 PCIE (0000:00:13.0) NSID 1 from core 1: 4951.50 19.34 3230.65 1084.58 6304.07 00:11:45.565 PCIE (0000:00:12.0) NSID 1 from core 1: 4951.50 19.34 3230.61 1078.44 6907.44 00:11:45.565 PCIE (0000:00:12.0) NSID 2 from core 1: 4951.50 19.34 3230.57 1075.29 6710.47 00:11:45.565 PCIE (0000:00:12.0) NSID 3 from core 1: 4951.50 19.34 3230.58 1074.15 6101.15 00:11:45.565 ======================================================== 00:11:45.565 Total : 29709.02 116.05 3230.33 1057.96 6907.44 00:11:45.565 00:11:48.098 Initializing NVMe Controllers 00:11:48.098 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:48.098 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:48.098 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:48.098 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:48.098 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:48.098 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:48.098 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:48.098 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:48.098 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:48.098 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:48.098 Initialization complete. Launching workers. 
00:11:48.098 ======================================================== 00:11:48.098 Latency(us) 00:11:48.098 Device Information : IOPS MiB/s Average min max 00:11:48.098 PCIE (0000:00:10.0) NSID 1 from core 2: 3124.80 12.21 5119.21 1132.72 12013.63 00:11:48.098 PCIE (0000:00:11.0) NSID 1 from core 2: 3124.80 12.21 5120.16 1152.84 11813.04 00:11:48.098 PCIE (0000:00:13.0) NSID 1 from core 2: 3124.80 12.21 5119.58 1126.16 13002.96 00:11:48.098 PCIE (0000:00:12.0) NSID 1 from core 2: 3124.80 12.21 5119.77 1130.12 11467.46 00:11:48.098 PCIE (0000:00:12.0) NSID 2 from core 2: 3124.80 12.21 5119.96 1136.87 11902.02 00:11:48.098 PCIE (0000:00:12.0) NSID 3 from core 2: 3124.80 12.21 5119.88 1153.93 11846.16 00:11:48.098 ======================================================== 00:11:48.098 Total : 18748.80 73.24 5119.76 1126.16 13002.96 00:11:48.098 00:11:48.098 ************************************ 00:11:48.098 END TEST nvme_multi_secondary 00:11:48.098 ************************************ 00:11:48.098 21:42:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 66531 00:11:48.098 21:42:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 66532 00:11:48.098 00:11:48.098 real 0m11.155s 00:11:48.098 user 0m18.599s 00:11:48.098 sys 0m1.078s 00:11:48.098 21:42:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.098 21:42:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:48.098 21:42:55 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:48.098 21:42:55 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:48.098 21:42:55 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/65469 ]] 00:11:48.098 21:42:55 nvme -- common/autotest_common.sh@1094 -- # kill 65469 00:11:48.098 21:42:55 nvme -- common/autotest_common.sh@1095 -- # wait 65469 00:11:48.098 [2024-12-10 21:42:55.622987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.623479] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.623572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.623627] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.629796] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.629872] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.629902] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.629934] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.634212] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 
00:11:48.098 [2024-12-10 21:42:55.634312] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.634344] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.634376] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.638776] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.638835] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.638855] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.638878] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66400) is not found. Dropping the request. 00:11:48.098 [2024-12-10 21:42:55.813190] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:11:48.357 21:42:55 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:11:48.357 21:42:55 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:11:48.357 21:42:55 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:48.357 21:42:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.357 21:42:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.357 21:42:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:48.357 ************************************ 00:11:48.357 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:48.357 ************************************ 00:11:48.357 21:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:48.357 * Looking for test storage... 
00:11:48.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:48.357 21:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.357 21:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.357 21:42:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.357 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.357 --rc genhtml_branch_coverage=1 00:11:48.357 --rc genhtml_function_coverage=1 00:11:48.357 --rc genhtml_legend=1 00:11:48.357 --rc geninfo_all_blocks=1 00:11:48.357 --rc geninfo_unexecuted_blocks=1 00:11:48.357 00:11:48.357 ' 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.617 --rc genhtml_branch_coverage=1 00:11:48.617 --rc genhtml_function_coverage=1 00:11:48.617 --rc genhtml_legend=1 00:11:48.617 --rc geninfo_all_blocks=1 00:11:48.617 --rc geninfo_unexecuted_blocks=1 00:11:48.617 00:11:48.617 ' 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.617 --rc genhtml_branch_coverage=1 00:11:48.617 --rc genhtml_function_coverage=1 00:11:48.617 --rc genhtml_legend=1 00:11:48.617 --rc geninfo_all_blocks=1 00:11:48.617 --rc geninfo_unexecuted_blocks=1 00:11:48.617 00:11:48.617 ' 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.617 --rc genhtml_branch_coverage=1 00:11:48.617 --rc genhtml_function_coverage=1 00:11:48.617 --rc genhtml_legend=1 00:11:48.617 --rc geninfo_all_blocks=1 00:11:48.617 --rc geninfo_unexecuted_blocks=1 00:11:48.617 00:11:48.617 ' 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:48.617 
21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66699 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66699 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 66699 ']' 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.617 21:42:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:48.617 [2024-12-10 21:42:56.320974] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:11:48.617 [2024-12-10 21:42:56.321132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66699 ] 00:11:48.907 [2024-12-10 21:42:56.531080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.166 [2024-12-10 21:42:56.681866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.166 [2024-12-10 21:42:56.682000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.166 [2024-12-10 21:42:56.682223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.166 [2024-12-10 21:42:56.682246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:50.104 nvme0n1 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_aQht9.txt 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:50.104 true 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733866977 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66723 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:50.104 21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:50.104 
21:42:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:52.637 [2024-12-10 21:42:59.795221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:52.637 [2024-12-10 21:42:59.795737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:52.637 [2024-12-10 21:42:59.795875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:52.637 [2024-12-10 21:42:59.795990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.637 [2024-12-10 21:42:59.798323] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66723 00:11:52.637 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66723 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66723 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_aQht9.txt 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:52.637 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_aQht9.txt 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66699 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 66699 ']' 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 66699 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66699 00:11:52.638 killing process with pid 66699 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66699' 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 66699 00:11:52.638 21:42:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 66699 00:11:55.208 21:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:55.208 21:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:55.208 ************************************ 00:11:55.208 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:55.208 ************************************ 00:11:55.208 00:11:55.208 real 0m6.786s 
00:11:55.208 user 0m23.468s 00:11:55.208 sys 0m0.938s 00:11:55.208 21:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.208 21:43:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:55.208 21:43:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:55.208 21:43:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:55.208 21:43:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:55.208 21:43:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.208 21:43:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:55.208 ************************************ 00:11:55.208 START TEST nvme_fio 00:11:55.208 ************************************ 00:11:55.208 21:43:02 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:11:55.208 21:43:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:55.208 21:43:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:55.208 21:43:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:55.208 21:43:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:55.208 21:43:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:11:55.208 21:43:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:55.208 21:43:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:55.208 21:43:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:55.208 21:43:02 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:55.208 21:43:02 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:55.208 21:43:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:55.208 21:43:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:55.208 21:43:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:55.208 21:43:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:55.208 21:43:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:55.466 21:43:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:55.466 21:43:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:55.725 21:43:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:55.725 21:43:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:55.725 21:43:03 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:55.725 21:43:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:56.003 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:56.003 fio-3.35 00:11:56.003 Starting 1 thread 00:11:59.290 00:11:59.290 test: (groupid=0, jobs=1): err= 0: pid=66882: Tue Dec 10 21:43:06 2024 00:11:59.290 read: IOPS=21.4k, BW=83.8MiB/s (87.8MB/s)(168MiB/2001msec) 00:11:59.290 slat (usec): min=4, max=178, avg= 5.41, stdev= 2.87 00:11:59.290 clat (usec): min=177, max=12977, avg=2978.42, stdev=738.43 00:11:59.290 lat (usec): min=182, max=13022, avg=2983.83, stdev=740.80 00:11:59.290 clat percentiles (usec): 00:11:59.290 | 1.00th=[ 2573], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2769], 00:11:59.290 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:59.290 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 3032], 95.00th=[ 3261], 00:11:59.290 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 9896], 00:11:59.290 | 99.99th=[12518] 00:11:59.290 bw ( KiB/s): min=72264, max=89936, per=97.39%, avg=83530.67, stdev=9787.71, samples=3 00:11:59.290 iops : min=18066, max=22484, avg=20882.67, stdev=2446.93, samples=3 00:11:59.290 write: IOPS=21.3k, BW=83.1MiB/s (87.1MB/s)(166MiB/2001msec); 0 zone resets 00:11:59.290 slat (usec): min=4, max=318, avg= 5.82, stdev= 3.18 00:11:59.290 clat (usec): min=193, max=12745, avg=2990.81, stdev=756.02 00:11:59.290 lat (usec): min=198, max=12764, avg=2996.64, stdev=758.43 00:11:59.290 clat percentiles (usec): 00:11:59.290 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:11:59.290 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:59.290 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 3032], 95.00th=[ 3294], 00:11:59.290 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7701], 99.95th=[10028], 00:11:59.290 | 99.99th=[12256] 00:11:59.290 bw ( KiB/s): min=72232, max=90008, per=98.24%, avg=83610.67, stdev=9879.62, samples=3 00:11:59.290 iops : min=18058, max=22502, avg=20902.67, stdev=2469.90, samples=3 00:11:59.290 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:59.290 lat (msec) : 2=0.21%, 4=95.63%, 10=4.06%, 20=0.05% 00:11:59.290 cpu : usr=98.80%, sys=0.35%, ctx=31, majf=0, 
minf=608 00:11:59.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:59.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.290 issued rwts: total=42905,42575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.290 00:11:59.290 Run status group 0 (all jobs): 00:11:59.290 READ: bw=83.8MiB/s (87.8MB/s), 83.8MiB/s-83.8MiB/s (87.8MB/s-87.8MB/s), io=168MiB (176MB), run=2001-2001msec 00:11:59.290 WRITE: bw=83.1MiB/s (87.1MB/s), 83.1MiB/s-83.1MiB/s (87.1MB/s-87.1MB/s), io=166MiB (174MB), run=2001-2001msec 00:11:59.549 ----------------------------------------------------- 00:11:59.549 Suppressions used: 00:11:59.549 count bytes template 00:11:59.549 1 32 /usr/src/fio/parse.c 00:11:59.549 1 8 libtcmalloc_minimal.so 00:11:59.549 ----------------------------------------------------- 00:11:59.549 00:11:59.549 21:43:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:59.549 21:43:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:59.549 21:43:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:59.549 21:43:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:59.808 21:43:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:59.808 21:43:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:00.378 21:43:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:00.378 21:43:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:00.378 21:43:07 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:00.378 21:43:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:00.378 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:00.378 fio-3.35 00:12:00.378 Starting 1 thread 00:12:04.570 00:12:04.570 test: (groupid=0, jobs=1): err= 0: pid=66948: Tue Dec 10 21:43:11 2024 00:12:04.570 read: IOPS=20.3k, BW=79.3MiB/s (83.1MB/s)(159MiB/2001msec) 00:12:04.570 slat (usec): min=4, max=255, avg= 5.68, stdev= 5.41 00:12:04.570 clat (usec): min=249, max=21331, avg=3133.85, stdev=1334.83 00:12:04.570 lat (usec): min=254, max=21400, avg=3139.53, stdev=1339.75 00:12:04.570 clat percentiles (usec): 00:12:04.570 | 1.00th=[ 2704], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2868], 00:12:04.570 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:12:04.570 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3294], 00:12:04.570 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12780], 99.95th=[15533], 00:12:04.570 | 99.99th=[20579] 00:12:04.570 bw ( KiB/s): min=64880, max=87792, per=98.21%, avg=79728.00, stdev=12874.66, samples=3 00:12:04.570 iops : min=16220, max=21948, avg=19932.00, stdev=3218.67, samples=3 00:12:04.570 write: IOPS=20.2k, BW=79.1MiB/s (82.9MB/s)(158MiB/2001msec); 0 zone resets 00:12:04.570 slat (nsec): min=4516, max=93626, avg=6134.78, stdev=5519.16 00:12:04.570 clat (usec): min=200, max=20924, avg=3147.46, stdev=1369.42 00:12:04.570 lat (usec): min=205, max=20966, avg=3153.59, stdev=1374.55 00:12:04.570 clat percentiles (usec): 00:12:04.570 | 1.00th=[ 2737], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:12:04.570 | 30.00th=[ 2900], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:12:04.570 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3326], 00:12:04.570 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12780], 99.95th=[16057], 00:12:04.570 | 99.99th=[20055] 00:12:04.570 bw ( KiB/s): min=65272, max=87784, per=98.42%, avg=79725.33, stdev=12544.57, samples=3 00:12:04.570 iops : min=16318, max=21946, avg=19931.33, stdev=3136.14, samples=3 00:12:04.570 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:04.570 lat (msec) : 2=0.05%, 4=97.38%, 10=0.59%, 20=1.92%, 50=0.01% 00:12:04.570 cpu : usr=99.00%, sys=0.15%, ctx=28, majf=0, minf=608 00:12:04.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:04.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:04.570 issued rwts: total=40611,40521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:04.570 00:12:04.570 Run status group 0 (all jobs): 00:12:04.570 READ: bw=79.3MiB/s (83.1MB/s), 79.3MiB/s-79.3MiB/s (83.1MB/s-83.1MB/s), io=159MiB (166MB), run=2001-2001msec 00:12:04.570 WRITE: bw=79.1MiB/s (82.9MB/s), 79.1MiB/s-79.1MiB/s (82.9MB/s-82.9MB/s), io=158MiB (166MB), run=2001-2001msec 00:12:04.570 ----------------------------------------------------- 00:12:04.570 Suppressions used: 00:12:04.570 count bytes template 00:12:04.570 1 32 /usr/src/fio/parse.c 00:12:04.570 1 8 libtcmalloc_minimal.so 00:12:04.570 ----------------------------------------------------- 
00:12:04.570
00:12:04.570 21:43:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:12:04.570 21:43:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:12:04.570 21:43:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:12:04.570 21:43:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:12:04.570 21:43:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:12:04.570 21:43:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:12:04.570 21:43:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:12:04.570 21:43:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:12:04.570 21:43:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:12:04.829 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:12:04.829 fio-3.35
00:12:04.830 Starting 1 thread
00:12:09.020
00:12:09.020 test: (groupid=0, jobs=1): err= 0: pid=67014: Tue Dec 10 21:43:16 2024
00:12:09.020 read: IOPS=21.7k, BW=84.9MiB/s (89.1MB/s)(170MiB/2001msec)
00:12:09.020 slat (usec): min=4, max=103, avg= 4.97, stdev= 1.12
00:12:09.020 clat (usec): min=212, max=10422, avg=2938.52, stdev=275.35
00:12:09.020 lat (usec): min=217, max=10525, avg=2943.48, stdev=275.65
00:12:09.020 clat percentiles (usec):
00:12:09.020 | 1.00th=[ 2376], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835],
00:12:09.020 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933],
00:12:09.020 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3195],
00:12:09.020 | 99.00th=[ 3818], 99.50th=[ 4293], 99.90th=[ 5997], 99.95th=[ 7767],
00:12:09.020 | 99.99th=[10028]
00:12:09.020 bw ( KiB/s): min=85048, max=87504, per=98.75%, avg=85880.00, stdev=1406.57, samples=3
00:12:09.020 iops : min=21262, max=21876, avg=21470.00, stdev=351.64, samples=3
00:12:09.020 write: IOPS=21.6k, BW=84.3MiB/s (88.4MB/s)(169MiB/2001msec); 0 zone resets
00:12:09.020 slat (nsec): min=4383, max=32525, avg=5310.13, stdev=914.49
00:12:09.020 clat (usec): min=193, max=10225, avg=2942.17, stdev=283.47
00:12:09.020 lat (usec): min=198, max=10247, avg=2947.48, stdev=283.71
00:12:09.020 clat percentiles (usec):
00:12:09.020 | 1.00th=[ 2278], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835],
00:12:09.020 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933],
00:12:09.020 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3195],
00:12:09.020 | 99.00th=[ 3851], 99.50th=[ 4293], 99.90th=[ 6194], 99.95th=[ 8029],
00:12:09.020 | 99.99th=[ 9765]
00:12:09.020 bw ( KiB/s): min=84872, max=88160, per=99.68%, avg=86058.67, stdev=1824.88, samples=3
00:12:09.020 iops : min=21218, max=22040, avg=21514.67, stdev=456.22, samples=3
00:12:09.020 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:12:09.020 lat (msec) : 2=0.48%, 4=98.67%, 10=0.81%, 20=0.01%
00:12:09.020 cpu : usr=99.30%, sys=0.20%, ctx=3, majf=0, minf=609
00:12:09.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:12:09.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:09.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:09.020 issued rwts: total=43506,43189,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:09.020 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:09.020
00:12:09.020 Run status group 0 (all jobs):
00:12:09.020 READ: bw=84.9MiB/s (89.1MB/s), 84.9MiB/s-84.9MiB/s (89.1MB/s-89.1MB/s), io=170MiB (178MB), run=2001-2001msec
00:12:09.020 WRITE: bw=84.3MiB/s (88.4MB/s), 84.3MiB/s-84.3MiB/s (88.4MB/s-88.4MB/s), io=169MiB (177MB), run=2001-2001msec
00:12:09.020 -----------------------------------------------------
00:12:09.020 Suppressions used:
00:12:09.020 count bytes template
00:12:09.020 1 32 /usr/src/fio/parse.c
00:12:09.020 1 8 libtcmalloc_minimal.so
00:12:09.020 -----------------------------------------------------
00:12:09.020
00:12:09.020 21:43:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:12:09.020 21:43:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:12:09.020 21:43:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:12:09.020 21:43:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:12:09.020 21:43:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:12:09.020 21:43:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:12:09.591 21:43:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:12:09.591 21:43:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:12:09.591 21:43:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:12:09.591 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:12:09.591 fio-3.35
00:12:09.591 Starting 1 thread
00:12:14.906
00:12:14.906 test: (groupid=0, jobs=1): err= 0: pid=67078: Tue Dec 10 21:43:22 2024
00:12:14.906 read: IOPS=21.1k, BW=82.3MiB/s (86.3MB/s)(165MiB/2001msec)
00:12:14.906 slat (nsec): min=3719, max=68946, avg=4725.99, stdev=1504.45
00:12:14.906 clat (usec): min=177, max=10726, avg=3026.09, stdev=504.93
00:12:14.906 lat (usec): min=181, max=10795, avg=3030.81, stdev=505.69
00:12:14.906 clat percentiles (usec):
00:12:14.906 | 1.00th=[ 2671], 5.00th=[ 2737], 10.00th=[ 2802], 20.00th=[ 2868],
00:12:14.906 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999],
00:12:14.906 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3130], 95.00th=[ 3228],
00:12:14.906 | 99.00th=[ 5473], 99.50th=[ 7635], 99.90th=[ 8586], 99.95th=[ 8717],
00:12:14.906 | 99.99th=[10421]
00:12:14.906 bw ( KiB/s): min=77464, max=85872, per=98.22%, avg=82802.67, stdev=4640.69, samples=3
00:12:14.906 iops : min=19366, max=21468, avg=20700.67, stdev=1160.17, samples=3
00:12:14.906 write: IOPS=20.9k, BW=81.8MiB/s (85.8MB/s)(164MiB/2001msec); 0 zone resets
00:12:14.906 slat (usec): min=3, max=111, avg= 5.11, stdev= 1.52
00:12:14.906 clat (usec): min=208, max=10541, avg=3037.78, stdev=520.40
00:12:14.906 lat (usec): min=214, max=10563, avg=3042.89, stdev=521.18
00:12:14.906 clat percentiles (usec):
00:12:14.906 | 1.00th=[ 2671], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2868],
00:12:14.906 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999],
00:12:14.906 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3163], 95.00th=[ 3228],
00:12:14.906 | 99.00th=[ 5735], 99.50th=[ 7963], 99.90th=[ 8586], 99.95th=[ 8717],
00:12:14.906 | 99.99th=[10028]
00:12:14.906 bw ( KiB/s): min=77280, max=85640, per=98.84%, avg=82824.00, stdev=4801.45, samples=3
00:12:14.906 iops : min=19320, max=21410, avg=20706.00, stdev=1200.36, samples=3
00:12:14.906 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:12:14.906 lat (msec) : 2=0.05%, 4=98.20%, 10=1.69%, 20=0.02%
00:12:14.906 cpu : usr=99.20%, sys=0.15%, ctx=2, majf=0, minf=606
00:12:14.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:12:14.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:14.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:14.906 issued rwts: total=42171,41921,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:14.906 latency : target=0, window=0, percentile=100.00%, depth=128
00:12:14.906
00:12:14.906 Run status group 0 (all jobs):
00:12:14.906 READ: bw=82.3MiB/s (86.3MB/s), 82.3MiB/s-82.3MiB/s (86.3MB/s-86.3MB/s), io=165MiB (173MB), run=2001-2001msec
00:12:14.906 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=164MiB (172MB), run=2001-2001msec
00:12:14.906 -----------------------------------------------------
00:12:14.906 Suppressions used:
00:12:14.906 count bytes template
00:12:14.906 1 32 /usr/src/fio/parse.c
00:12:14.906 1 8 libtcmalloc_minimal.so
00:12:14.906 -----------------------------------------------------
00:12:14.906
00:12:14.906 21:43:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:12:14.906 21:43:22 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:12:14.906
00:12:14.906 real 0m19.678s
00:12:14.906 user 0m14.698s
00:12:14.906 sys 0m5.715s
00:12:14.906 21:43:22 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:14.906 ************************************
00:12:14.906 END TEST nvme_fio
00:12:14.906 ************************************
00:12:14.906 21:43:22 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:12:14.906 ************************************
00:12:14.906 END TEST nvme
00:12:14.906 ************************************
00:12:14.906
00:12:14.906 real 1m35.623s
00:12:14.906 user 3m45.009s
00:12:14.906 sys 0m25.187s
00:12:14.906 21:43:22 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:14.906 21:43:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:14.906 21:43:22 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:12:14.906 21:43:22 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:12:14.906 21:43:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:14.906 21:43:22 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:14.906 21:43:22 -- common/autotest_common.sh@10 -- # set +x
00:12:14.906 ************************************
00:12:14.906 START TEST nvme_scc
00:12:14.906 ************************************
00:12:14.906 21:43:22 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:12:15.165 * Looking for test storage...
00:12:15.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@345 -- # : 1 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.165 21:43:22 nvme_scc -- scripts/common.sh@368 -- # return 0 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.165 --rc genhtml_branch_coverage=1 00:12:15.165 --rc genhtml_function_coverage=1 00:12:15.165 --rc genhtml_legend=1 00:12:15.165 --rc geninfo_all_blocks=1 00:12:15.165 --rc geninfo_unexecuted_blocks=1 00:12:15.165 00:12:15.165 ' 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.165 --rc genhtml_branch_coverage=1 00:12:15.165 --rc genhtml_function_coverage=1 00:12:15.165 --rc genhtml_legend=1 00:12:15.165 --rc geninfo_all_blocks=1 00:12:15.165 --rc geninfo_unexecuted_blocks=1 00:12:15.165 00:12:15.165 ' 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.165 --rc genhtml_branch_coverage=1 00:12:15.165 --rc genhtml_function_coverage=1 00:12:15.165 --rc genhtml_legend=1 00:12:15.165 --rc geninfo_all_blocks=1 00:12:15.165 --rc geninfo_unexecuted_blocks=1 00:12:15.165 00:12:15.165 ' 00:12:15.165 21:43:22 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.165 --rc genhtml_branch_coverage=1 00:12:15.165 --rc genhtml_function_coverage=1 00:12:15.165 --rc genhtml_legend=1 00:12:15.165 --rc geninfo_all_blocks=1 00:12:15.165 --rc geninfo_unexecuted_blocks=1 00:12:15.165 00:12:15.165 ' 00:12:15.165 21:43:22 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:15.165 21:43:22 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.166 21:43:22 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.166 21:43:22 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.166 21:43:22 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.166 21:43:22 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.166 21:43:22 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.166 21:43:22 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.166 21:43:22 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.166 21:43:22 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:15.166 21:43:22 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
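The lt/cmp_versions walk traced above is scripts/common.sh deciding whether the installed lcov (1.15 on this box, per `lcov --version | awk '{print $NF}'`) predates 2.x; the answer selects the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings exported just before this point. Stripped of xtrace form, the comparison amounts to the sketch below (condensed: only the '<' branch is shown, and version components are assumed to be plain integers):

  # Condensed sketch of cmp_versions as traced at scripts/common.sh@333-368.
  cmp_versions() {
      local -a ver1 ver2
      local v max
      IFS=.-: read -ra ver1 <<< "$1"    # e.g. 1.15 -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"    # e.g. 2    -> (2)
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal versions are not '<'
  }
  cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x; use the legacy --rc flags"

Missing components default to 0, so 1.15 compares as (1,15) against (2) and the first field already decides the result.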
00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:15.166 21:43:22 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:15.166 21:43:22 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:15.166 21:43:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:15.166 21:43:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:15.166 21:43:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:15.166 21:43:22 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:15.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:15.992 Waiting for block devices as requested 00:12:15.992 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.251 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.251 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.509 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:21.789 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:21.789 21:43:29 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:21.789 21:43:29 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:21.789 21:43:29 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:21.789 21:43:29 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:21.789 21:43:29 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:21.789 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:21.790 21:43:29 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.790 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.790 21:43:29 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:21.791 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:21.791 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"'
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.792 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.793 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.794 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:12:21.795 21:43:29 nvme_scc -- scripts/common.sh@18 -- # local i
00:12:21.795 21:43:29 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:12:21.795 21:43:29 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:21.795 21:43:29 nvme_scc -- scripts/common.sh@27 -- # return 0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.795 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:12:21.796 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.797 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:12:21.798 21:43:29
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
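At functions.sh@54-57 just above, the script walks the controller's sysfs directory with an extglob pattern that matches both the character-device namespace (ng1n1) and the block namespace (nvme1n1), keying _ctrl_ns by the trailing namespace ID. The idiom restated standalone (illustrative only; assumes the sysfs layout from this run):

# Illustrative restatement of the @54-58 namespace walk; not the literal source.
shopt -s extglob
ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @(ng1|nvme1n)*
    [[ -e $ns ]] || continue            # an unmatched glob would fall through literally
    ns_dev=${ns##*/}                    # ng1n1, then nvme1n1
    echo "_ctrl_ns[${ns_dev##*n}]=$ns_dev"   # NSID 1 -> ng1n1 / nvme1n1
done

${ctrl##*nvme} leaves "1" and ${ctrl##*/} leaves "nvme1", so the pattern expands to @(ng1|nvme1n)*; the second iteration (nvme1n1) overwrites the first in _ctrl_ns at @58 further down, so the block device wins.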
00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:12:21.798 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:12:21.799 21:43:29 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 
21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:21.799 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
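The ng1n1 fields captured above are enough to compute the namespace's byte capacity by hand: flbas=0x7 selects LBA format 7, whose descriptor (lbaf7, marked "(in use)") reports lbads:12, i.e. 4096-byte blocks, and nsze=0x17a17a is the size in blocks. A worked check using only those captured values:

# Worked example from the values parsed above for ng1n1.
nsze=$((0x17a17a))                 # 1548666 logical blocks
lbads=12                           # from lbaf7 'ms:64 lbads:12 rp:0 (in use)'
echo $((nsze * (1 << lbads)))      # 6343335936 bytes, ~5.9 GiB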
00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:21.800 
21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:21.800 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:21.801 21:43:29 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:21.801 21:43:29 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:21.801 21:43:29 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:21.801 21:43:29 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:21.801 21:43:29 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:21.801 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:21.802 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
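The register that matters most to this nvme_scc run was captured further up: nvme1[oncs]=0x15d, and bit 8 of ONCS is the NVMe base-spec flag for the Copy command that Simple Copy tests key on (0x15d has it set). How the suite actually gates on it may differ; an illustrative check against the parsed value:

# Illustrative capability check against the parsed ONCS value (0x15d above).
oncs=${nvme1[oncs]:-0}
if ((oncs & (1 << 8))); then                # bit 8: Copy command supported
    echo "nvme1 advertises Copy (oncs=$oncs)"
else
    echo "nvme1 lacks Copy support"
fi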
00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.802 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:21.803 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:21.803 
21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.803 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:21.804 
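The trace above is bash xtrace from SPDK's nvme/functions.sh: nvme_get declares a global associative array named after the device, runs the nvme-cli command it was handed, and splits each output line on ':' into a register/value pair, eval'ing every non-empty value into the array. A minimal sketch of that loop, reconstructed only from what the trace shows (exact whitespace handling and the nvme binary path in the real script are assumptions):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                # e.g. nvme2=(), as at functions.sh@20
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue      # skip headers/blank lines (functions.sh@22)
          read -r reg <<<"$reg"          # assumed trim of surrounding whitespace
          read -r val <<<"$val"
          eval "${ref}[$reg]=\"\$val\""  # nvme2[mec]="0", nvme2[oacs]="0x12a", ...
      done < <("$@")
  }

Called as in the trace, nvme_get nvme2 nvme id-ctrl /dev/nvme2 leaves ${nvme2[subnqn]} holding nqn.2019-08.org.qemu:12342. Keeping the value in $val at eval time, rather than interpolating it into the eval string as the raw trace does, is a defensive tweak of this sketch, not the script's literal text.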
00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:12:21.804 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
[xtrace condensed: the same read/eval loop fills ng2n1[] from the id-ns output:
  nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3
  dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0
  nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128
  mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
  nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
  lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)'
  lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:12:21.805 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:12:21.805 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:21.805 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:12:21.805 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:12:21.805 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:12:21.805 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
[xtrace condensed: ng2n2[] receives values identical to ng2n1[] above]
00:12:22.076 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:12:22.076 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:22.076 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:12:22.076 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:12:22.076 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:12:22.076 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
[xtrace condensed: ng2n3[] fills the same way; values captured in this span match ng2n1[] through nvmsetid=0, with the raw trace resuming below]
IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.077 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- 
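
The trace above is nvme_get() from nvme/functions.sh populating one global associative array per namespace (ng2n3 just finished) from `nvme id-ns` output: each output line is split on ':' into reg/val, empty values are skipped, and the pair is eval'd into the array. A minimal sketch of that loop, assuming a simplified skip rule and an NVME_BIN fallback that are not in the original script:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                      # e.g. declares a global ng2n3=()
    while IFS=: read -r reg val; do
        # id-ns prints lines such as "nsze : 0x100000" and "lbaf  4 : ms:0 ...".
        [[ -n $val ]] || continue            # assumption: skip banner/blank lines
        eval "${ref}[${reg// /}]=\"${val# }\""   # key "lbaf 4" collapses to lbaf4
    done < <("${NVME_BIN:-nvme}" "$@")       # the trace runs /usr/local/src/nvme-cli/nvme
}

nvme_get ng2n3 id-ns /dev/ng2n3              # afterwards ${ng2n3[nsze]} is 0x100000

The space-stripping on the key is why the multi-column "lbaf  N" rows land in the arrays as lbaf0..lbaf7, matching the assignments traced above.
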
nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.078 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:22.078 21:43:29 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.078 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
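
The surrounding loop (functions.sh@54) finds these namespaces by globbing sysfs with an extglob pattern that matches both the generic char nodes (ng2nY) and the block nodes (nvme2nY); _ctrl_ns is keyed by the trailing namespace index, so for the same index the nvme2nY entry overwrites the earlier ng2nY one, as _ctrl_ns[1]=nvme2n1 above replaces the ng2n1 mapping. A standalone sketch of that walk (the nullglob guard is an added assumption, not in the original):

shopt -s extglob nullglob                    # extglob enables the @(...) pattern
declare -A _ctrl_ns
ctrl=/sys/class/nvme/nvme2
# "${ctrl##*nvme}" -> "2" and "${ctrl##*/}" -> "nvme2", so the pattern below
# expands to @(ng2|nvme2n)* and matches ng2n1.. and nvme2n1.. alike.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue                 # same existence check as functions.sh@55
    ns_dev=${ns##*/}                         # e.g. ng2n3 or nvme2n3
    _ctrl_ns[${ns##*n}]=$ns_dev              # ${ns##*n} -> trailing index, e.g. 3
done
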
]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:22.079 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:22.079 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.080 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:22.081 
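
Among the captured fields, flbas selects which lbafN descriptor is active: here flbas=0x4 points at lbaf4, the entry the trace tags "(in use)" with lbads:12. A sketch of turning those strings into a block size and byte capacity, with values copied from the trace; masking only the low nibble of flbas (ignoring the extended-metadata bits) is an assumption of this sketch:

declare -A nvme2n2=(                         # stand-in seeded from the trace above
    [nsze]=0x100000
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)
fmt=$(( nvme2n2[flbas] & 0xf ))              # -> 4 (bits 0-3 of flbas)
desc=${nvme2n2[lbaf$fmt]}
lbads=${desc##*lbads:}; lbads=${lbads%% *}   # -> 12
echo "block size: $(( 1 << lbads )) bytes"   # -> 4096
echo "capacity:   $(( nvme2n2[nsze] * (1 << lbads) )) bytes"  # 0x100000 4KiB blocks -> 4 GiB
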
21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:22.081 21:43:29 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.081 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:22.082 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:22.082 21:43:29 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:22.082 21:43:29 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:22.082 21:43:29 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:22.082 21:43:29 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:22.082 21:43:29 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.082 21:43:29 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:22.082 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:22.083 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:22.083 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 
21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.083 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:22.084 21:43:29 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 
21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:22.084 
21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:22.084 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:22.085 21:43:29 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:22.085 21:43:29 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:12:22.085 21:43:29 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
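The ctrl_has_scc sweep running through this stretch of the trace (nvme1 and nvme0 above, nvme3 and nvme2 just below) boils down to one small pattern: nvme_get splits each "reg : val" line of nvme-cli's id-ctrl output on the colon and evals it into a bash associative array, and the feature gate then tests bit 8 of the parsed ONCS word, the Copy (simple copy) command bit; 0x15d & 0x100 is non-zero, so all four controllers in this run pass. A condensed sketch of that pattern, assuming it mirrors rather than reproduces the functions.sh source verbatim; the nvme-cli path, device name, and 0x15d value are the ones logged here:

    # Condensed sketch of the nvme_get / ctrl_has_scc pattern traced above
    # (assumed shape, not the verbatim functions.sh source).
    declare -A nvme1
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # register name, e.g. "oncs"
        val=${val# }                  # drop the blank after the colon
        [[ -n $reg && -n $val ]] && nvme1[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)

    oncs=${nvme1[oncs]}               # 0x15d for every controller in this run
    if (( oncs & 1 << 8 )); then      # ONCS bit 8 = Copy command, i.e. SCC
        echo "nvme1 supports simple copy"
    fi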
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:12:22.086 21:43:29 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:12:22.086 21:43:29 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:12:22.086 21:43:29 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:12:22.086 21:43:29 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:23.021 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:23.588 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:23.588 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:23.588 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:12:23.847 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:23.847 21:43:31 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:23.847 21:43:31 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:23.847 21:43:31 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:23.847 21:43:31 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:23.847 ************************************
00:12:23.847 START TEST nvme_simple_copy
00:12:23.847 ************************************
00:12:23.847 21:43:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:24.106 Initializing NVMe Controllers
00:12:24.106 Attaching to 0000:00:10.0
00:12:24.106 Controller supports SCC. Attached to 0000:00:10.0
00:12:24.106 Namespace ID: 1 size: 6GB
00:12:24.106 Initialization complete.
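run_test above hands the whole exercise to one standalone binary; its -r argument is an SPDK transport ID string naming the controller to attach to (PCIe transport, traddr = the 0000:00:10.0 BDF that bdfs[nvme1] resolved to). To rerun it by hand, assuming the same vagrant checkout and devices already rebound to uio_pci_generic by scripts/setup.sh as shown above:

    # Command taken verbatim from the trace; the path and BDF belong to this job.
    /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy \
        -r 'trtype:pcie traddr:0000:00:10.0'

The result lines that follow are the binary's own output: it writes LBAs 0-63 with random data, issues a Copy command targeting LBA 256, reads the destination back, and reports how many LBAs match.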
00:12:24.106
00:12:24.106 Controller QEMU NVMe Ctrl (12340 )
00:12:24.106 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:12:24.106 Namespace Block Size:4096
00:12:24.106 Writing LBAs 0 to 63 with Random Data
00:12:24.106 Copied LBAs from 0 - 63 to the Destination LBA 256
00:12:24.106 LBAs matching Written Data: 64
00:12:24.106
00:12:24.106 real 0m0.321s
00:12:24.106 user 0m0.123s
00:12:24.106 sys 0m0.096s
00:12:24.106 21:43:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:24.106 21:43:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:12:24.106 ************************************
00:12:24.106 END TEST nvme_simple_copy
00:12:24.106 ************************************
00:12:24.106 ************************************
00:12:24.106 END TEST nvme_scc
00:12:24.106 ************************************
00:12:24.106
00:12:24.106 real 0m9.314s
00:12:24.106 user 0m1.668s
00:12:24.106 sys 0m2.486s
00:12:24.106 21:43:31 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:24.106 21:43:31 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:24.364 21:43:31 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:12:24.364 21:43:31 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:12:24.364 21:43:31 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:12:24.364 21:43:31 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:12:24.364 21:43:31 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:12:24.364 21:43:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:24.364 21:43:31 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:24.364 21:43:31 -- common/autotest_common.sh@10 -- # set +x
00:12:24.364 ************************************
00:12:24.364 START TEST nvme_fdp
00:12:24.364 ************************************
00:12:24.364 21:43:31 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:12:24.364 * Looking for test storage...
00:12:24.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:24.364 21:43:32 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:24.364 21:43:32 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:12:24.364 21:43:32 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:24.623 21:43:32 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:12:24.623 21:43:32 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.623 21:43:32 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:24.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.623 --rc genhtml_branch_coverage=1 00:12:24.623 --rc genhtml_function_coverage=1 00:12:24.623 --rc genhtml_legend=1 00:12:24.623 --rc geninfo_all_blocks=1 00:12:24.623 --rc geninfo_unexecuted_blocks=1 00:12:24.623 00:12:24.623 ' 00:12:24.623 21:43:32 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:24.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.623 --rc genhtml_branch_coverage=1 00:12:24.623 --rc genhtml_function_coverage=1 00:12:24.623 --rc genhtml_legend=1 00:12:24.623 --rc geninfo_all_blocks=1 00:12:24.623 --rc geninfo_unexecuted_blocks=1 00:12:24.623 00:12:24.623 ' 00:12:24.623 21:43:32 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:24.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.623 --rc genhtml_branch_coverage=1 00:12:24.623 --rc genhtml_function_coverage=1 00:12:24.623 --rc genhtml_legend=1 00:12:24.623 --rc geninfo_all_blocks=1 00:12:24.623 --rc geninfo_unexecuted_blocks=1 00:12:24.623 00:12:24.623 ' 00:12:24.623 21:43:32 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:24.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.623 --rc genhtml_branch_coverage=1 00:12:24.623 --rc genhtml_function_coverage=1 00:12:24.623 --rc genhtml_legend=1 00:12:24.623 --rc geninfo_all_blocks=1 00:12:24.623 --rc geninfo_unexecuted_blocks=1 00:12:24.623 00:12:24.623 ' 00:12:24.623 21:43:32 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.623 21:43:32 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.623 21:43:32 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.623 21:43:32 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.623 21:43:32 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.623 21:43:32 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:24.623 21:43:32 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:24.623 21:43:32 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:24.623 21:43:32 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.623 21:43:32 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:25.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:25.449 Waiting for block devices as requested 00:12:25.449 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.449 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.707 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.707 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:30.998 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:30.998 21:43:38 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:30.998 21:43:38 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:30.998 21:43:38 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:30.998 21:43:38 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:30.998 21:43:38 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:30.998 21:43:38 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:30.998 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:30.999 21:43:38 nvme_fdp -- 
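
Note: mdts=7 above is expressed as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN, which is not part of this trace). Assuming the usual 4 KiB minimum page, the largest single transfer this QEMU controller accepts works out as:

  mdts=7
  mpsmin=4096   # assumption: CAP.MPSMIN = 4 KiB; not shown in the trace
  echo "$(( (1 << mdts) * mpsmin / 1024 )) KiB max transfer"   # -> 512 KiB
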
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:30.999 21:43:38 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:30.999 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:31.000 21:43:38 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 
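
Note: oncs=0x15d above is the Optional NVM Command Support bitfield. Reading the bit positions as defined in the NVMe base specification (worth double-checking against the spec revision in use), 0x15d sets bits 0, 2, 3, 4, 6 and 8, i.e. Compare, Dataset Management, Write Zeroes, per-feature save/select, Timestamp and Copy. A quick shell decode:

  oncs=0x15d
  (( oncs & (1 << 0) )) && echo "Compare supported"
  (( oncs & (1 << 2) )) && echo "Dataset Management supported"
  (( oncs & (1 << 3) )) && echo "Write Zeroes supported"
  (( oncs & (1 << 8) )) && echo "Copy supported"
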
21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:31.000 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:31.001 21:43:38 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:31.001 21:43:38 
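
Note: every eval above is one iteration of the same small loop in functions.sh: nvme_get runs nvme-cli, splits each "field : value" output line on the first colon, and stores the pair in a global associative array named after the device. A simplified sketch of that loop (trimming is approximate; the real helper also preserves padded string values such as sn and mn):

  declare -A idctrl=()
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue      # skip lines with no value part
      reg=${reg//[[:space:]]/}       # field name, e.g. "mdts"
      idctrl[$reg]=${val# }          # value, minus one leading space
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${idctrl[vid]} mdts=${idctrl[mdts]} subnqn=${idctrl[subnqn]}"
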
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:12:31.001 21:43:38 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.001 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:12:31.002 21:43:38 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
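
Note: the geometry scanned above pins down the namespace size: flbas=0x4 selects lbaf4 ("ms:0 lbads:12", i.e. 2^12 = 4096-byte logical blocks, no metadata), and nsze=0x140000 blocks. Multiplying the traced values out:

  nsze=0x140000    # namespace size in blocks, from the trace
  lbads=12         # lbaf4 (in use): 4096-byte logical blocks
  echo "$(( nsze * (1 << lbads) )) bytes"          # -> 5368709120
  echo "$(( nsze * (1 << lbads) >> 30 )) GiB"      # -> 5
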
00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:12:31.002 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:31.003 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:31.003 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:31.003 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:31.003 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:12:31.003 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:31.003 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:12:31.003 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
[xtrace compacted: the functions.sh@21-23 IFS=:/read/eval loop stores each id-ns field into nvme0n1[]: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0" lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"]
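What the compacted trace above shows is the nvme_get helper at work: run an nvme-cli identify command, split each "field : value" output line on the colon, and eval the pair into a globally scoped associative array named after the device. A minimal sketch of that pattern, reconstructed from the xtrace alone (the whitespace trimming and the skip rule are assumptions; the real functions.sh may differ in detail):

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern seen at functions.sh@16-23:
    # fold "reg : val" lines from an nvme-cli identify command into a
    # global associative array whose name is the first argument.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme0n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip lines without a value
            reg=${reg//[[:space:]]/}         # "nsze   " -> "nsze"
            val=${val# }                     # drop the single leading space
            eval "${ref}[${reg}]=\"${val}\"" # nvme0n1[nsze]="0x140000"
        done < <("$@")
    }

    # usage mirroring the log: nvme_get_sketch nvme0n1 nvme id-ns /dev/nvme0n1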
"' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:31.004 21:43:38 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:31.004 21:43:38 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:31.004 21:43:38 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:31.004 21:43:38 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:31.004 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:31.005 21:43:38 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.005 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.006 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
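With nvme1[] populated, later checks in the suite reduce to plain arithmetic on the captured identify fields. A hypothetical helper in that spirit, not taken from functions.sh: testing a single CTRATT bit (FDP support is commonly advertised via CTRATT bit 19 in recent NVMe revisions; the exact bit number is an assumption to verify against the spec in use).

    #!/usr/bin/env bash
    # Hypothetical consumer of the arrays built by the trace above.
    declare -A nvme1=([ctratt]=0x8000)   # sample value as captured in the log

    ctratt_bit_set() {
        local ctrl=$1 bit=$2
        local -n _ctrl=$ctrl             # nameref, as functions.sh@53 uses
        (((_ctrl[ctratt] >> bit) & 1))   # exit 0 iff the bit is set
    }

    ctratt_bit_set nvme1 19 && echo "nvme1 advertises FDP"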
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:12:31.007 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
[xtrace compacted: nvme_get stores each id-ns field into ng1n1[]: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0" lbaf4="ms:0 lbads:12 rp:0" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0 (in use)"]
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:31.009 21:43:38 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:31.009 21:43:38 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.009 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:31.010 21:43:38 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
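For nvme1n1 the interesting pair is the flbas=0x7 captured above together with the lbaf descriptors being filled in here: the low four bits of FLBAS index the active LBA format, and lbads in that descriptor is log2 of the data block size, so format 7 (ms:64 lbads:12, the one nvme-cli marks "in use" just below) means 4096-byte blocks with 64 bytes of per-block metadata. A small arithmetic check, using only values from this trace:

    flbas=0x7
    fmt='ms:64 lbads:12 rp:0 (in use)'         # lbaf7 descriptor as traced
    idx=$(( flbas & 0xf ))                     # low 4 bits select the format (7)
    lbads=${fmt#*lbads:}; lbads=${lbads%% *}   # extract "12"
    echo "lbaf$idx: $(( 1 << lbads ))-byte blocks"   # 2^12 = 4096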
00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:31.010 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:31.276 21:43:38 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:31.277 21:43:38 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:31.277 21:43:38 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:31.277 21:43:38 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:31.277 21:43:38 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
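Two of the nvme2 id-ctrl values just captured decode into friendlier numbers. VER packs the spec revision as (major<<16)|(minor<<8)|tertiary, so 0x10400 is NVMe 1.4.0; MDTS is a power-of-two multiple of the controller's minimum memory page size, so mdts=7 caps a single transfer at 512 KiB if the usual 4 KiB CAP.MPSMIN is assumed (the CAP register is not part of this dump):

    ver=0x10400; mdts=7
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    echo "max transfer: $(( (1 << mdts) * 4 )) KiB"   # 2^7 * 4 KiB = 512 KiB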
00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.277 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:31.277 21:43:38 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
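The thermal thresholds above are reported in Kelvin, per the identify data format, so this QEMU controller warns at 70 °C and treats 100 °C as critical:

    wctemp=343; cctemp=373                   # values traced for nvme2 above
    echo "warning:  $(( wctemp - 273 )) C"   # 70
    echo "critical: $(( cctemp - 273 )) C"   # 100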
00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:31.278 21:43:38 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.278 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
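The oncs=0x15d recorded for nvme2 a few fields back is the Optional NVM Command Support bitmask, and checking a capability is a one-line bit test. The bit labels below follow the NVMe base specification (bit 0 Compare, bit 2 Dataset Management, bit 3 Write Zeroes) and are reference notes, not part of this log:

    oncs=0x15d
    (( oncs & (1 << 0) )) && echo "Compare supported"
    (( oncs & (1 << 2) )) && echo "Dataset Management supported"
    (( oncs & (1 << 3) )) && echo "Write Zeroes supported"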
00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # 
00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:12:31.279 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:12:31.280 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
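Each namespace entry in this dump is picked up by the sysfs glob traced at functions.sh@53-58. The loop below is reconstructed from those trace records; it is assumed to sit inside a function with $ctrl set to the controller's sysfs path (here /sys/class/nvme/nvme2) and with extglob enabled:

    shopt -s extglob nullglob                                    # assumption: @() requires extglob
    local -n _ctrl_ns=${ctrl##*/}_ns                             # functions.sh@53: -> nvme2_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # functions.sh@54: ng2n*|nvme2n*
        [[ -e $ns ]] || continue                                 # functions.sh@55
        ns_dev=${ns##*/}                                         # functions.sh@56: ng2n1, ng2n2, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                  # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev                              # functions.sh@58: keyed by nsid
    done

Because _ctrl_ns is a nameref, the per-namespace device names land in the controller-specific array nvme2_ns.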
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:12:31.281 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:31.282 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
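Every dump so far reports flbas=0x4 with lbaf4 flagged "(in use)": the low four bits of flbas select the active LBA format, and lbaf4's lbads:12 means 2^12 = 4096-byte logical blocks with no separate metadata (ms:0). A hypothetical helper (not part of functions.sh) that decodes the block size from an array nvme_get has built:

    # lbads_bytes <array-name>: hypothetical, derives the in-use logical block size
    lbads_bytes() {
        local -n _ns=$1
        local fmt=$(( ${_ns[flbas]} & 0xf ))     # bits 3:0 of flbas select the lbaf entry
        local lbads=${_ns[lbaf$fmt]#*lbads:}     # 'ms:0 lbads:12 rp:0 (in use)' -> '12 rp:0 ...'
        lbads=${lbads%% *}                       # -> '12'
        echo $(( 1 << lbads ))                   # 2^12 = 4096
    }

    lbads_bytes ng2n2    # prints 4096 for the namespaces dumped here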
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:12:31.283 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
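The block that follows repeats the same id-ns data a fourth time: the glob at functions.sh@54 matches both the generic char device ng2n1 and the block device nvme2n1 for the same namespace, and the key expression at functions.sh@58 reduces both names to the same index, so the later match simply overwrites the earlier entry:

    ns=/sys/class/nvme/nvme2/ng2n1;   echo "${ns##*n}"   # 1 -> _ctrl_ns[1]=ng2n1
    ns=/sys/class/nvme/nvme2/nvme2n1; echo "${ns##*n}"   # 1 -> _ctrl_ns[1]=nvme2n1 (overwrites)

Glob expansion sorts ng2n* before nvme2n*, which is why the three ng2n? dumps precede the nvme2n? ones.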
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.284 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:31.284 
21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f nvme2n1[dps]=0 nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1
00:12:31.285 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0 nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0
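The nvme/functions.sh@16-@23 records above are nvme_get at work: it runs nvme-cli, splits each "field : value" output line on the first colon with IFS=: read, and evals the pair into a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's plain-text output; nvme_get_sketch is an illustrative name, not the function in nvme/functions.sh:

  nvme_get_sketch() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                          # e.g. declare the global array nvme2n1
      while IFS=: read -r reg val; do
          [[ -n $reg ]] || continue                # skip lines with no field name
          reg=${reg%"${reg##*[![:space:]]}"}       # trim trailing spaces from the key
          val=${val#"${val%%[![:space:]]*}"}       # trim leading spaces, keep trailing ones
          eval "${ref}[$reg]=\$val"                # nvme2n1[dpc]=0x1f and so on
      done < <("$@")
  }
  # usage: nvme_get_sketch nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1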
00:12:31.285 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0 nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0
00:12:31.285 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:12:31.285 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000 nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4
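The @54 loop above enumerates a controller's namespaces with an extglob alternation that matches both the ngX character nodes and the nvmeXnY block nodes under /sys/class/nvme. A sketch of that walk with the per-namespace bookkeeping from @55-@58; enumerate_ns_sketch is an illustrative name and the filtering guard is an assumption about which matches get indexed:

  shopt -s extglob nullglob
  declare -A _ctrl_ns=()
  enumerate_ns_sketch() {
      local ctrl=$1 ns                               # e.g. /sys/class/nvme/nvme2
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          [[ $ns == *"${ctrl##*/}n"* ]] || continue  # index only the block nodes (assumed guard)
          [[ -e $ns ]] || continue
          # ${ns##*n} strips through the last 'n': .../nvme2n1 -> 1
          _ctrl_ns[${ns##*n}]=${ns##*/}
      done
  }
  # usage: enumerate_ns_sketch /sys/class/nvme/nvme2; declare -p _ctrl_ns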
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0 nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1
00:12:31.286 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0 nvme2n2[noiob]=0 nvme2n2[nvmcap]=0 nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127 nvme2n2[nulbaf]=0 nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:12:31.287 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 nvme2n3[ncap]=0x100000 nvme2n3[nuse]=0x100000 nvme2n3[nsfeat]=0x14
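Values such as the lbaf strings above contain spaces and an "(in use)" marker, and the target array name is itself dynamic, which is why every assignment in the trace goes through eval with the value quoted. A stripped-down illustration of the same quoting, with the names and value taken from the log (the real script builds the eval string slightly differently):

  declare -A nvme2n3=()
  ref=nvme2n3 reg=lbaf4 val='ms:0 lbads:12 rp:0 (in use)'
  eval "${ref}[$reg]=\$val"    # \$val defers expansion, so the spaces and parens survive intact
  [[ ${nvme2n3[lbaf4]} == *'(in use)'* ]] && echo "lbaf4 is the active LBA format"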
00:12:31.288 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 nvme2n3[flbas]=0x4 nvme2n3[mc]=0x3 nvme2n3[dpc]=0x1f nvme2n3[dps]=0 nvme2n3[nmic]=0 nvme2n3[rescap]=0 nvme2n3[fpi]=0 nvme2n3[dlfeat]=1
00:12:31.288 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 nvme2n3[nawupf]=0 nvme2n3[nacwu]=0 nvme2n3[nabsn]=0 nvme2n3[nabo]=0 nvme2n3[nabspf]=0 nvme2n3[noiob]=0 nvme2n3[nvmcap]=0 nvme2n3[npwg]=0 nvme2n3[npwa]=0 nvme2n3[npdg]=0 nvme2n3[npda]=0 nvme2n3[nows]=0
00:12:31.288 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 nvme2n3[mcl]=128 nvme2n3[msrc]=127 nvme2n3[nulbaf]=0 nvme2n3[anagrpid]=0 nvme2n3[nsattr]=0 nvme2n3[nvmsetid]=0 nvme2n3[endgid]=0
00:12:31.288 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 nvme2n3[eui64]=0000000000000000
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:12:31.289 21:43:38 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:31.289 21:43:38 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:12:31.289 21:43:38 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:31.289 21:43:38 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
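Between namespaces and the next controller, @60-@63 file nvme2 into the bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls), and scripts/common.sh gates each controller by PCI address before it is parsed. A sketch of that gate and registration, assuming the empty "[[ =~ ]]" and "[[ -z '' ]]" checks above are block/allow lists carried in PCI_BLOCKED and PCI_ALLOWED (both empty in this run, hence the return 0); both function names are illustrative:

  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()
  pci_can_use_sketch() {
      local i bdf=$1
      [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1   # explicitly blocked
      [[ -z $PCI_ALLOWED ]] && return 0                  # no allow list: every device is usable
      for i in $PCI_ALLOWED; do
          [[ $i == "$bdf" ]] && return 0                 # on the allow list
      done
      return 1
  }
  register_ctrl_sketch() {
      local ctrl=$1 pci=$2 ctrl_dev=${1##*/}             # e.g. nvme3, 0000:00:13.0
      pci_can_use_sketch "$pci" || return 0
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ctrl_dev}_ns                    # name of the per-namespace array
      bdfs[$ctrl_dev]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev         # e.g. ordered_ctrls[3]=nvme3
  }
  # usage: register_ctrl_sketch /sys/class/nvme/nvme3 0000:00:13.0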
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 nvme3[ssvid]=0x1af4 nvme3[sn]='12343 ' nvme3[mn]='QEMU NVMe Ctrl ' nvme3[fr]='8.0.0 '
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 nvme3[ieee]=525400 nvme3[cmic]=0x2 nvme3[mdts]=7 nvme3[cntlid]=0 nvme3[ver]=0x10400 nvme3[rtd3r]=0 nvme3[rtd3e]=0
00:12:31.289 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 nvme3[ctratt]=0x88010 nvme3[rrls]=0 nvme3[cntrltype]=1 nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 nvme3[crdt2]=0 nvme3[crdt3]=0 nvme3[nvmsr]=0 nvme3[vwci]=0 nvme3[mec]=0
00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a nvme3[acl]=3 nvme3[aerl]=3 nvme3[frmw]=0x3 nvme3[lpa]=0x7 nvme3[elpe]=0 nvme3[npss]=0 nvme3[avscc]=0 nvme3[apsta]=0
00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 nvme3[cctemp]=373 nvme3[mtfa]=0 nvme3[hmpre]=0
00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- #
eval 'nvme3[hmmin]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.290 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:31.291 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
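The trace above is nvme/functions.sh loading every "reg : val" pair from the controller's identify output into a per-controller array. A minimal standalone sketch of that parsing pattern, with made-up sample input rather than values from this run:

    #!/usr/bin/env bash
    # Sketch of the IFS=: / read -r reg val / eval loop traced above: split each
    # "reg : val" line on the first colon and store it in an associative array
    # named after the controller.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg// /}     # strip spaces around the register name
        val=${val# }       # drop the space that follows the colon
        [[ -n $val ]] && eval "nvme3[$reg]=\"$val\""   # same guard as functions.sh@22
    done <<'EOF'
    crdt1 : 0
    oacs : 0x12a
    subnqn : nqn.2019-08.org.qemu:fdp-subsys3
    EOF
    echo "${nvme3[oacs]}"   # prints 0x12a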
00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:31.292 21:43:38 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:31.292 21:43:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:31.575 21:43:38 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:31.575 21:43:38 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:31.575 21:43:39 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:12:31.575 21:43:39 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:12:31.575 21:43:39 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:12:31.575 21:43:39 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:31.575 21:43:39 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:31.575 21:43:39 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:32.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:33.079 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:33.079 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:33.079 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:33.079 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:33.079 21:43:40 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:33.079 21:43:40 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.079 21:43:40 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.079 21:43:40 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:33.079 ************************************ 00:12:33.079 START TEST nvme_flexible_data_placement 00:12:33.079 ************************************ 00:12:33.079 21:43:40 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:33.339 Initializing NVMe Controllers 00:12:33.339 Attaching to 0000:00:13.0 00:12:33.339 Controller supports FDP Attached to 0000:00:13.0 00:12:33.339 Namespace ID: 1 Endurance Group ID: 1 00:12:33.339 Initialization complete. 
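The controller selection just traced comes down to one bit: ctrl_has_fdp reads CTRATT and tests bit 19, which advertises Flexible Data Placement, so nvme3 (ctratt=0x88010) is picked while the other controllers (ctratt=0x8000) are skipped. A minimal sketch of that test, using the two CTRATT values seen above:

    #!/usr/bin/env bash
    # Sketch of ctrl_has_fdp (functions.sh@176-180): FDP support is CTRATT bit 19.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))
    }
    for ctratt in 0x8000 0x88010; do
        ctrl_has_fdp "$ctratt" && echo "$ctratt: FDP supported" || echo "$ctratt: no FDP"
    done
    # -> 0x8000: no FDP
    # -> 0x88010: FDP supported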
00:12:33.339 00:12:33.339 ================================== 00:12:33.339 == FDP tests for Namespace: #01 == 00:12:33.339 ================================== 00:12:33.339 00:12:33.339 Get Feature: FDP: 00:12:33.339 ================= 00:12:33.339 Enabled: Yes 00:12:33.339 FDP configuration Index: 0 00:12:33.339 00:12:33.339 FDP configurations log page 00:12:33.339 =========================== 00:12:33.339 Number of FDP configurations: 1 00:12:33.339 Version: 0 00:12:33.339 Size: 112 00:12:33.339 FDP Configuration Descriptor: 0 00:12:33.339 Descriptor Size: 96 00:12:33.339 Reclaim Group Identifier format: 2 00:12:33.339 FDP Volatile Write Cache: Not Present 00:12:33.339 FDP Configuration: Valid 00:12:33.339 Vendor Specific Size: 0 00:12:33.339 Number of Reclaim Groups: 2 00:12:33.339 Number of Reclaim Unit Handles: 8 00:12:33.339 Max Placement Identifiers: 128 00:12:33.339 Number of Namespaces Supported: 256 00:12:33.339 Reclaim Unit Nominal Size: 6000000 bytes 00:12:33.339 Estimated Reclaim Unit Time Limit: Not Reported 00:12:33.339 RUH Desc #000: RUH Type: Initially Isolated 00:12:33.339 RUH Desc #001: RUH Type: Initially Isolated 00:12:33.339 RUH Desc #002: RUH Type: Initially Isolated 00:12:33.339 RUH Desc #003: RUH Type: Initially Isolated 00:12:33.339 RUH Desc #004: RUH Type: Initially Isolated 00:12:33.339 RUH Desc #005: RUH Type: Initially Isolated 00:12:33.339 RUH Desc #006: RUH Type: Initially Isolated 00:12:33.339 RUH Desc #007: RUH Type: Initially Isolated 00:12:33.339 00:12:33.339 FDP reclaim unit handle usage log page 00:12:33.339 ====================================== 00:12:33.339 Number of Reclaim Unit Handles: 8 00:12:33.339 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:33.339 RUH Usage Desc #001: RUH Attributes: Unused 00:12:33.339 RUH Usage Desc #002: RUH Attributes: Unused 00:12:33.339 RUH Usage Desc #003: RUH Attributes: Unused 00:12:33.339 RUH Usage Desc #004: RUH Attributes: Unused 00:12:33.339 RUH Usage Desc #005: RUH Attributes: Unused 00:12:33.339 RUH Usage Desc #006: RUH Attributes: Unused 00:12:33.339 RUH Usage Desc #007: RUH Attributes: Unused 00:12:33.339 00:12:33.339 FDP statistics log page 00:12:33.339 ======================= 00:12:33.339 Host bytes with metadata written: 963362816 00:12:33.339 Media bytes with metadata written: 964198400 00:12:33.339 Media bytes erased: 0 00:12:33.339 00:12:33.339 FDP Reclaim unit handle status 00:12:33.339 ============================== 00:12:33.339 Number of RUHS descriptors: 2 00:12:33.339 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002944 00:12:33.339 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:33.339 00:12:33.339 FDP write on placement id: 0 success 00:12:33.339 00:12:33.339 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:33.339 00:12:33.339 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:33.339 00:12:33.339 Get Feature: FDP Events for Placement handle: #0 00:12:33.339 ======================== 00:12:33.339 Number of FDP Events: 6 00:12:33.339 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:33.339 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:33.339 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:12:33.339 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:33.339 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:33.339 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:33.339 00:12:33.339 FDP events log page
00:12:33.339 =================== 00:12:33.339 Number of FDP events: 1 00:12:33.339 FDP Event #0: 00:12:33.339 Event Type: RU Not Written to Capacity 00:12:33.339 Placement Identifier: Valid 00:12:33.339 NSID: Valid 00:12:33.339 Location: Valid 00:12:33.339 Placement Identifier: 0 00:12:33.339 Event Timestamp: 8 00:12:33.339 Namespace Identifier: 1 00:12:33.339 Reclaim Group Identifier: 0 00:12:33.339 Reclaim Unit Handle Identifier: 0 00:12:33.339 00:12:33.339 FDP test passed 00:12:33.339 00:12:33.339 real 0m0.307s 00:12:33.339 user 0m0.106s 00:12:33.339 sys 0m0.099s 00:12:33.339 21:43:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.339 ************************************ 00:12:33.339 END TEST nvme_flexible_data_placement 00:12:33.339 ************************************ 00:12:33.339 21:43:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:33.599 00:12:33.599 real 0m9.185s 00:12:33.599 user 0m1.677s 00:12:33.599 sys 0m2.586s 00:12:33.599 21:43:41 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.599 ************************************ 00:12:33.599 END TEST nvme_fdp 00:12:33.599 ************************************ 00:12:33.599 21:43:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:33.599 21:43:41 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:12:33.599 21:43:41 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:33.599 21:43:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:33.599 21:43:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.599 21:43:41 -- common/autotest_common.sh@10 -- # set +x 00:12:33.599 ************************************ 00:12:33.599 START TEST nvme_rpc 00:12:33.599 ************************************ 00:12:33.599 21:43:41 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:33.599 * Looking for test storage... 
00:12:33.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:33.599 21:43:41 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:33.599 21:43:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:33.599 21:43:41 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:33.859 21:43:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.859 21:43:41 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:12:33.859 21:43:41 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.859 21:43:41 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.859 --rc genhtml_branch_coverage=1 00:12:33.859 --rc genhtml_function_coverage=1 00:12:33.859 --rc genhtml_legend=1 00:12:33.859 --rc geninfo_all_blocks=1 00:12:33.859 --rc geninfo_unexecuted_blocks=1 00:12:33.859 00:12:33.859 ' 00:12:33.859 21:43:41 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.859 --rc genhtml_branch_coverage=1 00:12:33.859 --rc genhtml_function_coverage=1 00:12:33.859 --rc genhtml_legend=1 00:12:33.859 --rc geninfo_all_blocks=1 00:12:33.859 --rc geninfo_unexecuted_blocks=1 00:12:33.859 00:12:33.859 ' 00:12:33.859 21:43:41 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.859 --rc genhtml_branch_coverage=1 00:12:33.859 --rc genhtml_function_coverage=1 00:12:33.859 --rc genhtml_legend=1 00:12:33.859 --rc geninfo_all_blocks=1 00:12:33.859 --rc geninfo_unexecuted_blocks=1 00:12:33.859 00:12:33.859 ' 00:12:33.859 21:43:41 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.859 --rc genhtml_branch_coverage=1 00:12:33.860 --rc genhtml_function_coverage=1 00:12:33.860 --rc genhtml_legend=1 00:12:33.860 --rc geninfo_all_blocks=1 00:12:33.860 --rc geninfo_unexecuted_blocks=1 00:12:33.860 00:12:33.860 ' 00:12:33.860 21:43:41 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:33.860 21:43:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:33.860 21:43:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:33.860 21:43:41 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:33.860 21:43:41 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68490 00:12:33.860 21:43:41 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:33.860 21:43:41 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68490 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68490 ']' 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.860 21:43:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.119 [2024-12-10 21:43:41.617790] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:12:34.119 [2024-12-10 21:43:41.617930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68490 ] 00:12:34.119 [2024-12-10 21:43:41.803279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:34.378 [2024-12-10 21:43:41.939996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.378 [2024-12-10 21:43:41.940039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.314 21:43:42 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.314 21:43:42 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:35.314 21:43:42 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:35.572 Nvme0n1 00:12:35.572 21:43:43 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:35.572 21:43:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:35.832 request: 00:12:35.832 { 00:12:35.832 "bdev_name": "Nvme0n1", 00:12:35.832 "filename": "non_existing_file", 00:12:35.832 "method": "bdev_nvme_apply_firmware", 00:12:35.832 "req_id": 1 00:12:35.832 } 00:12:35.832 Got JSON-RPC error response 00:12:35.832 response: 00:12:35.832 { 00:12:35.832 "code": -32603, 00:12:35.832 "message": "open file failed." 00:12:35.832 } 00:12:35.832 21:43:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:35.832 21:43:43 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:35.832 21:43:43 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:36.091 21:43:43 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:36.091 21:43:43 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 68490 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68490 ']' 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68490 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68490 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.091 killing process with pid 68490 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68490' 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@973 -- # kill 68490 00:12:36.091 21:43:43 nvme_rpc -- common/autotest_common.sh@978 -- # wait 68490 00:12:38.626 00:12:38.626 real 0m5.065s 00:12:38.626 user 0m9.179s 00:12:38.626 sys 0m0.885s 00:12:38.626 21:43:46 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.626 ************************************ 00:12:38.626 END TEST nvme_rpc 00:12:38.626 ************************************ 00:12:38.626 21:43:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.626 21:43:46 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:38.626 21:43:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:12:38.626 21:43:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.626 21:43:46 -- common/autotest_common.sh@10 -- # set +x 00:12:38.626 ************************************ 00:12:38.626 START TEST nvme_rpc_timeouts 00:12:38.626 ************************************ 00:12:38.626 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:38.885 * Looking for test storage... 00:12:38.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.885 21:43:46 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.885 --rc genhtml_branch_coverage=1 00:12:38.885 --rc genhtml_function_coverage=1 00:12:38.885 --rc genhtml_legend=1 00:12:38.885 --rc geninfo_all_blocks=1 00:12:38.885 --rc geninfo_unexecuted_blocks=1 00:12:38.885 00:12:38.885 ' 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.885 --rc genhtml_branch_coverage=1 00:12:38.885 --rc genhtml_function_coverage=1 00:12:38.885 --rc genhtml_legend=1 00:12:38.885 --rc geninfo_all_blocks=1 00:12:38.885 --rc geninfo_unexecuted_blocks=1 00:12:38.885 00:12:38.885 ' 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.885 --rc genhtml_branch_coverage=1 00:12:38.885 --rc genhtml_function_coverage=1 00:12:38.885 --rc genhtml_legend=1 00:12:38.885 --rc geninfo_all_blocks=1 00:12:38.885 --rc geninfo_unexecuted_blocks=1 00:12:38.885 00:12:38.885 ' 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.885 --rc genhtml_branch_coverage=1 00:12:38.885 --rc genhtml_function_coverage=1 00:12:38.885 --rc genhtml_legend=1 00:12:38.885 --rc geninfo_all_blocks=1 00:12:38.885 --rc geninfo_unexecuted_blocks=1 00:12:38.885 00:12:38.885 ' 00:12:38.885 21:43:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:38.885 21:43:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68577 00:12:38.885 21:43:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68577 00:12:38.885 21:43:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68609 00:12:38.885 21:43:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:38.885 21:43:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:38.885 21:43:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68609 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 68609 ']' 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.885 21:43:46 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:39.144 [2024-12-10 21:43:46.639945] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:12:39.144 [2024-12-10 21:43:46.640118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68609 ] 00:12:39.144 [2024-12-10 21:43:46.826247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:39.403 [2024-12-10 21:43:46.972397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.403 [2024-12-10 21:43:46.972430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.350 21:43:47 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.350 Checking default timeout settings: 00:12:40.350 21:43:47 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:12:40.350 21:43:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:40.350 21:43:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:40.610 Making settings changes with rpc: 00:12:40.610 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:40.610 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:40.869 Check default vs. modified settings: 00:12:40.869 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:12:40.869 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68577 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68577 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:41.439 Setting action_on_timeout is changed as expected. 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68577 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68577 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:41.439 Setting timeout_us is changed as expected. 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
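Each of the checks above uses the same pipeline: grep one setting out of a saved config dump, keep the second column with awk, strip punctuation with sed, then compare the default value against the modified one. A condensed sketch of a single check, assuming the two settings dumps saved earlier in this test still exist at those paths (the extract_setting helper is illustrative, not part of the script):

    #!/usr/bin/env bash
    # One settings check from nvme_rpc_timeouts.sh, condensed into a helper.
    extract_setting() {
        local name=$1 file=$2
        grep "$name" "$file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(extract_setting timeout_us /tmp/settings_default_68577)    # -> 0
    after=$(extract_setting timeout_us /tmp/settings_modified_68577)    # -> 12000000
    [[ $before != "$after" ]] && echo "Setting timeout_us is changed as expected."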
00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68577 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68577 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:41.439 Setting timeout_admin_us is changed as expected. 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68577 /tmp/settings_modified_68577 00:12:41.439 21:43:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68609 00:12:41.439 21:43:48 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 68609 ']' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 68609 00:12:41.439 21:43:48 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:12:41.439 21:43:48 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.439 21:43:48 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68609 00:12:41.439 killing process with pid 68609 00:12:41.439 21:43:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.439 21:43:49 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.439 21:43:49 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68609' 00:12:41.439 21:43:49 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 68609 00:12:41.439 21:43:49 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 68609 00:12:43.989 RPC TIMEOUT SETTING TEST PASSED. 00:12:43.989 21:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
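The passing trace above boils down to a small save-and-diff loop: nvme_rpc_timeouts.sh snapshots the bdev_nvme configuration with rpc.py save_config before and after the bdev_nvme_set_options call, then compares the three timeout fields between the two snapshots. A minimal sketch of that loop, reconstructed from the xtrace markers (@38-@47) rather than copied from the script; the failure branch is an assumption, since only the success path is exercised here:

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        # sed strips quotes/commas from the JSON snapshot, so '"abort",' compares as 'abort'
        setting_before=$(grep "$setting" "$tmpfile_default_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            echo "Setting $setting was not changed!" >&2   # assumed failure branch (@43-@46, never hit above)
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done

In the run above this yields none->abort, 0->12000000 and 0->24000000, which is exactly what the bdev_nvme_set_options call requested.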
00:12:43.989 ************************************ 00:12:43.989 END TEST nvme_rpc_timeouts 00:12:43.989 ************************************ 00:12:43.989 00:12:43.989 real 0m5.354s 00:12:43.989 user 0m9.940s 00:12:43.989 sys 0m0.952s 00:12:43.989 21:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.989 21:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:44.270 21:43:51 -- spdk/autotest.sh@239 -- # uname -s 00:12:44.270 21:43:51 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:12:44.270 21:43:51 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:44.270 21:43:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.270 21:43:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.270 21:43:51 -- common/autotest_common.sh@10 -- # set +x 00:12:44.270 ************************************ 00:12:44.270 START TEST sw_hotplug 00:12:44.270 ************************************ 00:12:44.270 21:43:51 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:44.270 * Looking for test storage... 00:12:44.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:44.270 21:43:51 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:44.270 21:43:51 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:12:44.270 21:43:51 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:44.270 21:43:51 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.270 21:43:51 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:12:44.271 21:43:51 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.271 21:43:51 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.271 21:43:51 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.271 21:43:51 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:12:44.271 21:43:51 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.271 21:43:51 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.271 --rc genhtml_branch_coverage=1 00:12:44.271 --rc genhtml_function_coverage=1 00:12:44.271 --rc genhtml_legend=1 00:12:44.271 --rc geninfo_all_blocks=1 00:12:44.271 --rc geninfo_unexecuted_blocks=1 00:12:44.271 00:12:44.271 ' 00:12:44.271 21:43:51 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.271 --rc genhtml_branch_coverage=1 00:12:44.271 --rc genhtml_function_coverage=1 00:12:44.271 --rc genhtml_legend=1 00:12:44.271 --rc geninfo_all_blocks=1 00:12:44.271 --rc geninfo_unexecuted_blocks=1 00:12:44.271 00:12:44.271 ' 00:12:44.271 21:43:51 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.271 --rc genhtml_branch_coverage=1 00:12:44.271 --rc genhtml_function_coverage=1 00:12:44.271 --rc genhtml_legend=1 00:12:44.271 --rc geninfo_all_blocks=1 00:12:44.271 --rc geninfo_unexecuted_blocks=1 00:12:44.271 00:12:44.271 ' 00:12:44.271 21:43:51 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.271 --rc genhtml_branch_coverage=1 00:12:44.271 --rc genhtml_function_coverage=1 00:12:44.271 --rc genhtml_legend=1 00:12:44.271 --rc geninfo_all_blocks=1 00:12:44.271 --rc geninfo_unexecuted_blocks=1 00:12:44.271 00:12:44.271 ' 00:12:44.271 21:43:51 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:44.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:45.097 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:45.097 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:45.097 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:45.097 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:45.097 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:45.097 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:45.097 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
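Before the device scan expands below, note the lcov probe that just completed: scripts/common.sh splits both version strings on '.', '-' and ':' and walks them field by field. A condensed sketch of that comparison, specialized to the '<' operator exercised here (lt 1.15 2); the decimal() field validation and the generic operator dispatch of the traced cmp_versions helper are left out for brevity:

    lt() { cmp_versions_lt "$1" "$2"; }   # stand-in name; the traced helper is cmp_versions "$1" '<' "$2"
    cmp_versions_lt() {
        local -a ver1 ver2
        local v IFS='.-:'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # first higher field: not less-than
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first lower field: less-than holds
        done
        return 1   # all fields equal: not strictly less
    }

Here 1 < 2 on the first field, so lt succeeds and the extra LCOV_OPTS above get exported. The nvme_in_userspace expansion traced next then fills the nvmes array by filtering lspci -mm -n -D output for class 01, subclass 08, progif 02 (NVMe) devices.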
00:12:45.097 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@233 -- # local class 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:12:45.097 21:43:52 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:12:45.098 21:43:52 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:12:45.098 21:43:52 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:45.098 21:43:52 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:12:45.098 21:43:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:45.098 21:43:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:12:45.098 21:43:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:45.098 21:43:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:45.356 21:43:52 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:12:45.356 21:43:52 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:45.356 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:45.356 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:45.356 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:45.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:46.182 Waiting for block devices as requested 00:12:46.182 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:46.182 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:46.441 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:46.441 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.729 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:51.729 21:43:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:51.729 21:43:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:52.298 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:52.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:52.298 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:52.866 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:53.126 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:53.126 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:53.126 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:53.126 21:44:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=69505 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:53.385 21:44:00 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:53.385 21:44:00 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:53.385 21:44:00 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:53.385 21:44:00 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:53.385 21:44:00 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:53.385 21:44:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:53.644 Initializing NVMe Controllers 00:12:53.644 Attaching to 0000:00:10.0 00:12:53.644 Attaching to 0000:00:11.0 00:12:53.644 Attached to 0000:00:11.0 00:12:53.644 Attached to 0000:00:10.0 00:12:53.644 Initialization complete. Starting I/O... 
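With both controllers attached and I/O about to start, remove_attach_helper 3 6 false now drives three surprise-removal cycles. A sketch of that loop, pieced together from the sw_hotplug.sh xtrace markers scattered through the trace below (@38-@66); bash xtrace does not record redirections, so the sysfs targets here are assumptions based on the standard Linux PCI hotplug interface (the explicit 'echo 1 > /sys/bus/pci/rescan' in the cleanup trap later in this log at least confirms the rescan side):

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
        while ((hotplug_events--)); do               # @38: 3 events in this run
            for dev in "${nvmes[@]}"; do
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40: surprise hot-remove
            done
            # @43: with use_bdev=false the helper watches the devices directly;
            # with use_bdev=true (the tgt_run_hotplug pass later in this log) it
            # polls rpc_cmd bdev_get_bdevs until the removed addresses disappear
            echo 1 > /sys/bus/pci/rescan             # @56: bring the controllers back
            for dev in "${nvmes[@]}"; do
                # @58-@62: rebind to uio_pci_generic; xtrace shows only the echoed
                # values (driver name, BDF twice, '') and not the sysfs files they
                # target, so the rebind writes are left as a comment here
                :
            done
            sleep $((hotplug_wait * 2))              # @66: the 'sleep 12' seen below
        done
    }

Each cycle therefore shows the same signature in the log: nvme_ctrlr_fail "in failed state" errors and aborted outstanding commands on removal, followed by "Attaching to"/"Attached to" lines and resumed I/O counters once the devices are rebound.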
00:12:53.644 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:53.644 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:53.644 00:12:54.604 QEMU NVMe Ctrl (12341 ): 1336 I/Os completed (+1336) 00:12:54.604 QEMU NVMe Ctrl (12340 ): 1329 I/Os completed (+1329) 00:12:54.604 00:12:55.541 QEMU NVMe Ctrl (12341 ): 3128 I/Os completed (+1792) 00:12:55.541 QEMU NVMe Ctrl (12340 ): 3121 I/Os completed (+1792) 00:12:55.541 00:12:56.476 QEMU NVMe Ctrl (12341 ): 5280 I/Os completed (+2152) 00:12:56.476 QEMU NVMe Ctrl (12340 ): 5273 I/Os completed (+2152) 00:12:56.476 00:12:57.853 QEMU NVMe Ctrl (12341 ): 7112 I/Os completed (+1832) 00:12:57.853 QEMU NVMe Ctrl (12340 ): 7111 I/Os completed (+1838) 00:12:57.853 00:12:58.789 QEMU NVMe Ctrl (12341 ): 9012 I/Os completed (+1900) 00:12:58.789 QEMU NVMe Ctrl (12340 ): 9011 I/Os completed (+1900) 00:12:58.789 00:12:59.357 21:44:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:59.357 21:44:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.357 21:44:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.357 [2024-12-10 21:44:06.959337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:59.357 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:59.357 [2024-12-10 21:44:06.961584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:06.961671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:06.961696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:06.961722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:59.357 [2024-12-10 21:44:06.965001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:06.965071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:06.965094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:06.965115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 21:44:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.357 21:44:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.357 [2024-12-10 21:44:07.004030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:59.357 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:59.357 [2024-12-10 21:44:07.005850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:07.005910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:07.005944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:07.005966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:59.357 [2024-12-10 21:44:07.009196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:07.009248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:07.009274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 [2024-12-10 21:44:07.009295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.357 EAL: Cannot open sysfs resource 00:12:59.357 EAL: pci_scan_one(): cannot parse resource 00:12:59.357 EAL: Scan for (pci) bus failed. 00:12:59.357 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:59.357 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:59.617 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:59.617 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:59.617 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:59.617 00:12:59.617 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:59.617 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:59.617 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:59.617 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:59.617 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:59.617 Attaching to 0000:00:10.0 00:12:59.617 Attached to 0000:00:10.0 00:12:59.876 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:59.876 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:59.876 21:44:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:59.876 Attaching to 0000:00:11.0 00:12:59.876 Attached to 0000:00:11.0 00:13:00.812 QEMU NVMe Ctrl (12340 ): 1676 I/Os completed (+1676) 00:13:00.812 QEMU NVMe Ctrl (12341 ): 1469 I/Os completed (+1469) 00:13:00.812 00:13:01.748 QEMU NVMe Ctrl (12340 ): 3524 I/Os completed (+1848) 00:13:01.748 QEMU NVMe Ctrl (12341 ): 3321 I/Os completed (+1852) 00:13:01.748 00:13:02.684 QEMU NVMe Ctrl (12340 ): 5260 I/Os completed (+1736) 00:13:02.684 QEMU NVMe Ctrl (12341 ): 5064 I/Os completed (+1743) 00:13:02.684 00:13:03.621 QEMU NVMe Ctrl (12340 ): 7068 I/Os completed (+1808) 00:13:03.621 QEMU NVMe Ctrl (12341 ): 6872 I/Os completed (+1808) 00:13:03.621 00:13:04.555 QEMU NVMe Ctrl (12340 ): 8880 I/Os completed (+1812) 00:13:04.555 QEMU NVMe Ctrl (12341 ): 8694 I/Os completed (+1822) 00:13:04.555 00:13:05.491 QEMU NVMe Ctrl (12340 ): 10592 I/Os completed (+1712) 00:13:05.491 QEMU NVMe Ctrl (12341 ): 10415 I/Os completed (+1721) 00:13:05.491 00:13:06.872 QEMU NVMe Ctrl (12340 ): 12307 I/Os completed (+1715) 00:13:06.872 QEMU NVMe Ctrl (12341 ): 12182 I/Os completed (+1767) 00:13:06.872 
00:13:07.809 QEMU NVMe Ctrl (12340 ): 14131 I/Os completed (+1824) 00:13:07.809 QEMU NVMe Ctrl (12341 ): 14008 I/Os completed (+1826) 00:13:07.809 00:13:08.746 QEMU NVMe Ctrl (12340 ): 16323 I/Os completed (+2192) 00:13:08.746 QEMU NVMe Ctrl (12341 ): 16200 I/Os completed (+2192) 00:13:08.746 00:13:09.681 QEMU NVMe Ctrl (12340 ): 18483 I/Os completed (+2160) 00:13:09.681 QEMU NVMe Ctrl (12341 ): 18360 I/Os completed (+2160) 00:13:09.681 00:13:10.617 QEMU NVMe Ctrl (12340 ): 20647 I/Os completed (+2164) 00:13:10.617 QEMU NVMe Ctrl (12341 ): 20524 I/Os completed (+2164) 00:13:10.617 00:13:11.554 QEMU NVMe Ctrl (12340 ): 22811 I/Os completed (+2164) 00:13:11.554 QEMU NVMe Ctrl (12341 ): 22688 I/Os completed (+2164) 00:13:11.554 00:13:11.813 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:11.813 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:11.813 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:11.813 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:11.813 [2024-12-10 21:44:19.409421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:11.813 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:11.813 [2024-12-10 21:44:19.411150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.411211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.411241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.411264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:11.813 [2024-12-10 21:44:19.416623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.416681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.416700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.416720] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:11.813 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:11.813 [2024-12-10 21:44:19.450042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:11.813 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:11.813 [2024-12-10 21:44:19.451631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.451674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.451702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.451721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:11.813 [2024-12-10 21:44:19.454385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.454425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.454446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 [2024-12-10 21:44:19.454466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:11.813 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:11.813 EAL: Scan for (pci) bus failed. 00:13:11.813 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:11.813 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:12.073 Attaching to 0000:00:10.0 00:13:12.073 Attached to 0000:00:10.0 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:12.073 21:44:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:12.073 Attaching to 0000:00:11.0 00:13:12.073 Attached to 0000:00:11.0 00:13:12.643 QEMU NVMe Ctrl (12340 ): 1020 I/Os completed (+1020) 00:13:12.643 QEMU NVMe Ctrl (12341 ): 794 I/Os completed (+794) 00:13:12.643 00:13:13.581 QEMU NVMe Ctrl (12340 ): 3152 I/Os completed (+2132) 00:13:13.581 QEMU NVMe Ctrl (12341 ): 2965 I/Os completed (+2171) 00:13:13.581 00:13:14.521 QEMU NVMe Ctrl (12340 ): 5308 I/Os completed (+2156) 00:13:14.521 QEMU NVMe Ctrl (12341 ): 5121 I/Os completed (+2156) 00:13:14.521 00:13:15.461 QEMU NVMe Ctrl (12340 ): 7440 I/Os completed (+2132) 00:13:15.461 QEMU NVMe Ctrl (12341 ): 7253 I/Os completed (+2132) 00:13:15.461 00:13:16.840 QEMU NVMe Ctrl (12340 ): 9568 I/Os completed (+2128) 00:13:16.840 QEMU NVMe Ctrl (12341 ): 9381 I/Os completed (+2128) 00:13:16.840 00:13:17.778 QEMU NVMe Ctrl (12340 ): 11740 I/Os completed (+2172) 00:13:17.778 QEMU NVMe Ctrl (12341 ): 11554 I/Os completed (+2173) 00:13:17.778 00:13:18.714 QEMU NVMe Ctrl (12340 ): 13868 I/Os completed (+2128) 00:13:18.714 QEMU NVMe Ctrl (12341 ): 13682 I/Os completed (+2128) 00:13:18.714 
00:13:19.649 QEMU NVMe Ctrl (12340 ): 15984 I/Os completed (+2116) 00:13:19.649 QEMU NVMe Ctrl (12341 ): 15798 I/Os completed (+2116) 00:13:19.649 00:13:20.584 QEMU NVMe Ctrl (12340 ): 18120 I/Os completed (+2136) 00:13:20.584 QEMU NVMe Ctrl (12341 ): 17935 I/Os completed (+2137) 00:13:20.584 00:13:21.519 QEMU NVMe Ctrl (12340 ): 20268 I/Os completed (+2148) 00:13:21.519 QEMU NVMe Ctrl (12341 ): 20083 I/Os completed (+2148) 00:13:21.519 00:13:22.455 QEMU NVMe Ctrl (12340 ): 22388 I/Os completed (+2120) 00:13:22.455 QEMU NVMe Ctrl (12341 ): 22203 I/Os completed (+2120) 00:13:22.455 00:13:23.833 QEMU NVMe Ctrl (12340 ): 24508 I/Os completed (+2120) 00:13:23.833 QEMU NVMe Ctrl (12341 ): 24323 I/Os completed (+2120) 00:13:23.833 00:13:24.092 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:24.092 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:24.092 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:24.092 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:24.092 [2024-12-10 21:44:31.798688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:24.092 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:24.092 [2024-12-10 21:44:31.800408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.092 [2024-12-10 21:44:31.800468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.092 [2024-12-10 21:44:31.800491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.092 [2024-12-10 21:44:31.800514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.092 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:24.092 [2024-12-10 21:44:31.803452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.092 [2024-12-10 21:44:31.803508] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.092 [2024-12-10 21:44:31.803528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.092 [2024-12-10 21:44:31.803547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:24.351 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:24.351 [2024-12-10 21:44:31.839057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:24.351 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:24.351 [2024-12-10 21:44:31.840627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 [2024-12-10 21:44:31.840683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 [2024-12-10 21:44:31.840711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 [2024-12-10 21:44:31.840735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:24.351 [2024-12-10 21:44:31.843446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 [2024-12-10 21:44:31.843489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 [2024-12-10 21:44:31.843513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 [2024-12-10 21:44:31.843530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:24.351 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:24.351 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:24.351 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:24.351 EAL: Scan for (pci) bus failed. 00:13:24.351 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:24.351 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:24.351 21:44:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:24.351 21:44:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:24.351 21:44:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:24.351 21:44:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:24.351 21:44:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:24.351 21:44:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:24.351 Attaching to 0000:00:10.0 00:13:24.351 Attached to 0000:00:10.0 00:13:24.611 21:44:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:24.611 QEMU NVMe Ctrl (12340 ): 192 I/Os completed (+192) 00:13:24.611 00:13:24.611 21:44:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:24.611 21:44:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:24.611 Attaching to 0000:00:11.0 00:13:24.611 Attached to 0000:00:11.0 00:13:24.611 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:24.611 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:24.611 [2024-12-10 21:44:32.172343] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:36.822 21:44:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:36.822 21:44:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:36.822 21:44:44 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.21 00:13:36.822 21:44:44 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.21 00:13:36.822 21:44:44 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:36.822 21:44:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.21 00:13:36.822 21:44:44 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.21 2 00:13:36.823 remove_attach_helper took 43.21s 
to complete (handling 2 nvme drive(s)) 21:44:44 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 69505 00:13:43.427 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (69505) - No such process 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 69505 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=70048 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:43.427 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 70048 00:13:43.427 21:44:50 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 70048 ']' 00:13:43.427 21:44:50 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.427 21:44:50 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.427 21:44:50 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.427 21:44:50 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.427 21:44:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:43.427 [2024-12-10 21:44:50.308558] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:13:43.427 [2024-12-10 21:44:50.308748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70048 ] 00:13:43.427 [2024-12-10 21:44:50.518189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.427 [2024-12-10 21:44:50.657556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:43.994 21:44:51 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:43.994 21:44:51 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:50.560 [2024-12-10 21:44:57.780578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:50.560 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:50.560 EAL: Scan for (pci) bus failed. 
00:13:50.560 [2024-12-10 21:44:57.783499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.560 [2024-12-10 21:44:57.783543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.560 [2024-12-10 21:44:57.783562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.560 [2024-12-10 21:44:57.783591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.560 [2024-12-10 21:44:57.783604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.560 [2024-12-10 21:44:57.783619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.560 [2024-12-10 21:44:57.783633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.560 [2024-12-10 21:44:57.783649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.560 [2024-12-10 21:44:57.783661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.560 [2024-12-10 21:44:57.783679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.560 [2024-12-10 21:44:57.783691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.560 [2024-12-10 21:44:57.783705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:50.560 21:44:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.560 21:44:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:50.560 21:44:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:50.560 21:44:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:50.560 [2024-12-10 21:44:58.179924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:50.560 [2024-12-10 21:44:58.182681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.560 [2024-12-10 21:44:58.182726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.560 [2024-12-10 21:44:58.182748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.560 [2024-12-10 21:44:58.182774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.560 [2024-12-10 21:44:58.182789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.560 [2024-12-10 21:44:58.182802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.560 [2024-12-10 21:44:58.182818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.560 [2024-12-10 21:44:58.182830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.560 [2024-12-10 21:44:58.182844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.560 [2024-12-10 21:44:58.182858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.560 [2024-12-10 21:44:58.182872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.560 [2024-12-10 21:44:58.182883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:50.819 21:44:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.819 21:44:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:50.819 21:44:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:50.819 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:51.079 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:51.079 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:51.079 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:51.079 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:51.079 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:51.079 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:51.079 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:51.079 21:44:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:03.287 21:45:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.287 21:45:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:03.287 21:45:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:03.287 [2024-12-10 21:45:10.859785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:03.287 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:03.287 EAL: Scan for (pci) bus failed. 
00:14:03.287 [2024-12-10 21:45:10.862232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.287 [2024-12-10 21:45:10.862266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.287 [2024-12-10 21:45:10.862284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.287 [2024-12-10 21:45:10.862312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.287 [2024-12-10 21:45:10.862325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.287 [2024-12-10 21:45:10.862340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.287 [2024-12-10 21:45:10.862353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.287 [2024-12-10 21:45:10.862368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.287 [2024-12-10 21:45:10.862381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.287 [2024-12-10 21:45:10.862397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.287 [2024-12-10 21:45:10.862408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.287 [2024-12-10 21:45:10.862423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:03.287 21:45:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.287 21:45:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:03.287 21:45:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:03.287 21:45:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:03.546 [2024-12-10 21:45:11.259163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:03.546 [2024-12-10 21:45:11.261904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.546 [2024-12-10 21:45:11.261948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.546 [2024-12-10 21:45:11.261972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.546 [2024-12-10 21:45:11.261998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.546 [2024-12-10 21:45:11.262013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.546 [2024-12-10 21:45:11.262025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.546 [2024-12-10 21:45:11.262041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.546 [2024-12-10 21:45:11.262081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.546 [2024-12-10 21:45:11.262097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.546 [2024-12-10 21:45:11.262111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.546 [2024-12-10 21:45:11.262125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.546 [2024-12-10 21:45:11.262137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.805 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:03.805 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:03.805 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:03.805 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:03.805 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:03.805 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:03.805 21:45:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.805 21:45:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:03.805 21:45:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.805 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:03.805 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:04.064 21:45:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:16.287 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:16.287 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:16.287 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:16.287 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:16.287 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:16.287 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:16.287 21:45:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.287 21:45:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:16.287 21:45:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:16.288 21:45:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.288 21:45:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:16.288 [2024-12-10 21:45:23.938737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:16.288 [2024-12-10 21:45:23.941445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.288 [2024-12-10 21:45:23.941496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.288 [2024-12-10 21:45:23.941515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.288 [2024-12-10 21:45:23.941542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.288 [2024-12-10 21:45:23.941555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.288 [2024-12-10 21:45:23.941573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.288 [2024-12-10 21:45:23.941587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.288 [2024-12-10 21:45:23.941602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.288 [2024-12-10 21:45:23.941613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.288 [2024-12-10 21:45:23.941629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.288 [2024-12-10 21:45:23.941640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.288 [2024-12-10 21:45:23.941654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.288 21:45:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:16.288 21:45:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:16.856 [2024-12-10 21:45:24.338097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:16.856 [2024-12-10 21:45:24.340708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.856 [2024-12-10 21:45:24.340751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.856 [2024-12-10 21:45:24.340771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.856 [2024-12-10 21:45:24.340797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.856 [2024-12-10 21:45:24.340811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.856 [2024-12-10 21:45:24.340824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.856 [2024-12-10 21:45:24.340840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.856 [2024-12-10 21:45:24.340852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.856 [2024-12-10 21:45:24.340871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.856 [2024-12-10 21:45:24.340884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.856 [2024-12-10 21:45:24.340898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.856 [2024-12-10 21:45:24.340910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.856 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:16.856 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:16.856 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:16.856 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:16.856 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:16.856 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:16.856 21:45:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.856 21:45:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:16.856 21:45:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.856 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:16.856 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:17.115 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:17.374 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:17.374 21:45:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:14:29.581 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:29.581 21:45:36 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:29.581 21:45:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:29.581 21:45:36 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:36.149 21:45:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:36.149 21:45:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:36.149 21:45:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:36.149 21:45:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:36.149 21:45:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:36.149 21:45:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.149 21:45:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:36.149 [2024-12-10 21:45:43.022429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:36.149 [2024-12-10 21:45:43.024844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:36.149 [2024-12-10 21:45:43.024893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.149 [2024-12-10 21:45:43.024911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.149 [2024-12-10 21:45:43.024939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:36.149 [2024-12-10 21:45:43.024951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.149 [2024-12-10 21:45:43.024966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.149 [2024-12-10 21:45:43.024981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:36.149 [2024-12-10 21:45:43.024995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.149 [2024-12-10 21:45:43.025007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.149 [2024-12-10 21:45:43.025025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:36.149 [2024-12-10 21:45:43.025036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.149 [2024-12-10 21:45:43.025065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.149 21:45:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:36.149 [2024-12-10 21:45:43.421773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:36.149 [2024-12-10 21:45:43.423562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:36.149 [2024-12-10 21:45:43.423601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.149 [2024-12-10 21:45:43.423622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.149 [2024-12-10 21:45:43.423646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:36.149 [2024-12-10 21:45:43.423661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.149 [2024-12-10 21:45:43.423673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.149 [2024-12-10 21:45:43.423690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:36.149 [2024-12-10 21:45:43.423701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.149 [2024-12-10 21:45:43.423716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.149 [2024-12-10 21:45:43.423730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:36.149 [2024-12-10 21:45:43.423744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:36.149 [2024-12-10 21:45:43.423757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:36.149 21:45:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.149 21:45:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:36.149 21:45:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:36.149 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:36.408 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:36.408 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:36.408 21:45:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:48.612 21:45:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:48.612 21:45:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:48.612 21:45:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:48.612 21:45:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:48.612 21:45:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:48.612 21:45:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:48.612 21:45:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.612 21:45:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:48.612 21:45:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:48.612 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:48.612 21:45:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.612 21:45:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:48.612 [2024-12-10 21:45:56.101402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:48.612 [2024-12-10 21:45:56.103090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.612 [2024-12-10 21:45:56.103141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.612 [2024-12-10 21:45:56.103159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.612 [2024-12-10 21:45:56.103186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.612 [2024-12-10 21:45:56.103198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.613 [2024-12-10 21:45:56.103213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.613 [2024-12-10 21:45:56.103226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.613 [2024-12-10 21:45:56.103241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.613 [2024-12-10 21:45:56.103254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.613 [2024-12-10 21:45:56.103272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.613 [2024-12-10 21:45:56.103284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.613 [2024-12-10 21:45:56.103298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.613 21:45:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.613 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:48.613 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:48.871 [2024-12-10 21:45:56.500765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:48.871 [2024-12-10 21:45:56.503114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.871 [2024-12-10 21:45:56.503160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.871 [2024-12-10 21:45:56.503182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.871 [2024-12-10 21:45:56.503206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.871 [2024-12-10 21:45:56.503223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.872 [2024-12-10 21:45:56.503236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.872 [2024-12-10 21:45:56.503253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.872 [2024-12-10 21:45:56.503264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.872 [2024-12-10 21:45:56.503279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.872 [2024-12-10 21:45:56.503292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.872 [2024-12-10 21:45:56.503306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.872 [2024-12-10 21:45:56.503318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.872 [2024-12-10 21:45:56.503339] bdev_nvme.c:5595:aer_cb: *WARNING*: AER request execute failed 00:14:48.872 [2024-12-10 21:45:56.503354] bdev_nvme.c:5595:aer_cb: *WARNING*: AER request execute failed 00:14:48.872 [2024-12-10 21:45:56.503370] bdev_nvme.c:5595:aer_cb: *WARNING*: AER request execute failed 00:14:48.872 [2024-12-10 21:45:56.503381] bdev_nvme.c:5595:aer_cb: *WARNING*: AER request execute failed 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:49.130 21:45:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.130 21:45:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:49.130 21:45:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:49.130 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- 
# echo 0000:00:10.0 00:14:49.389 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:49.389 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:49.389 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:49.389 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:49.389 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:49.389 21:45:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:49.389 21:45:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:49.389 21:45:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:01.595 21:46:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.595 21:46:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:01.595 21:46:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:01.595 [2024-12-10 21:46:09.080548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:01.595 [2024-12-10 21:46:09.083219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:01.595 [2024-12-10 21:46:09.083264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.595 [2024-12-10 21:46:09.083281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.595 [2024-12-10 21:46:09.083311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:01.595 [2024-12-10 21:46:09.083323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.595 [2024-12-10 21:46:09.083338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.595 [2024-12-10 21:46:09.083352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:01.595 [2024-12-10 21:46:09.083366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.595 [2024-12-10 21:46:09.083378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.595 [2024-12-10 21:46:09.083393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:01.595 [2024-12-10 21:46:09.083404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.595 [2024-12-10 21:46:09.083419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:01.595 21:46:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.595 21:46:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:01.595 21:46:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:01.595 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:01.854 [2024-12-10 21:46:09.479891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:01.854 [2024-12-10 21:46:09.482331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:01.854 [2024-12-10 21:46:09.482375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.854 [2024-12-10 21:46:09.482395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.854 [2024-12-10 21:46:09.482418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:01.854 [2024-12-10 21:46:09.482433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.854 [2024-12-10 21:46:09.482445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.854 [2024-12-10 21:46:09.482465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:01.854 [2024-12-10 21:46:09.482478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.854 [2024-12-10 21:46:09.482493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.854 [2024-12-10 21:46:09.482506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:01.854 [2024-12-10 21:46:09.482520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.854 [2024-12-10 21:46:09.482532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:02.112 21:46:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.112 21:46:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:02.112 21:46:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:02.112 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:02.371 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:02.371 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:02.371 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:02.371 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:02.371 21:46:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
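Each hotplug iteration in this trace has the same shape: sw_hotplug.sh@39-@40 echo 1 once per device, surprise-removing the controllers (which is what produces the nvme_ctrlr_fail and aborted ASYNC EVENT REQUEST messages above); @50-@51 poll bdev_bdfs every 0.5 s, printing "Still waiting for %s to be gone" until no BDF is reported; @56-@62 bring the devices back (an echo 1, then per device the driver name uio_pci_generic, the BDF twice, and an empty string); and @66-@71 sleep 12 s, then verify that bdev_bdfs again returns "0000:00:10.0 0000:00:11.0" before hotplug_events is decremented. xtrace records only the echoed arguments, never the redirection targets, so the sketch below fills those in with the standard sysfs PCI hotplug knobs as an assumption (and condenses the two BDF writes into one drivers_probe write), not as the script's literal code:

  # One remove/attach cycle as implied by the trace; all sysfs paths assumed.
  for dev in "${nvmes[@]}"; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"    # @40: surprise-remove
  done
  bdfs=($(bdev_bdfs))
  while ((${#bdfs[@]} > 0)); do                      # @50: some bdevs still present
      sleep 0.5
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
      bdfs=($(bdev_bdfs))
  done
  echo 1 > /sys/bus/pci/rescan                       # @56: rediscover the removed devices
  for dev in "${nvmes[@]}"; do                       # @58-@62: steer each device to its driver
      echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
      echo "$dev" > /sys/bus/pci/drivers_probe
      echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # reset the override
  done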
00:15:02.371 21:46:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:02.371 21:46:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:02.371 21:46:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:15:14.575 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:14.575 21:46:22 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 70048 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 70048 ']' 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 70048 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70048 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.575 killing process with pid 70048 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70048' 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@973 -- # kill 70048 00:15:14.575 21:46:22 sw_hotplug -- common/autotest_common.sh@978 -- # wait 70048 00:15:17.107 21:46:24 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:17.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:18.238 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.238 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.238 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:18.238 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:18.497 00:15:18.497 real 2m34.278s 00:15:18.497 user 1m52.131s 00:15:18.497 sys 0m22.566s 00:15:18.497 21:46:26 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.497 21:46:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:18.497 ************************************ 00:15:18.497 END TEST sw_hotplug 00:15:18.497 ************************************ 00:15:18.497 21:46:26 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:15:18.497 21:46:26 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:18.497 21:46:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:18.497 21:46:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.497 21:46:26 -- common/autotest_common.sh@10 -- # set +x 00:15:18.497 ************************************ 00:15:18.497 START TEST nvme_xnvme 00:15:18.497 ************************************ 00:15:18.497 21:46:26 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:18.497 * Looking for test storage... 00:15:18.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:18.497 21:46:26 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:18.497 21:46:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:18.497 21:46:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.759 21:46:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:18.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.759 --rc genhtml_branch_coverage=1 00:15:18.759 --rc genhtml_function_coverage=1 00:15:18.759 --rc genhtml_legend=1 00:15:18.759 --rc geninfo_all_blocks=1 00:15:18.759 --rc geninfo_unexecuted_blocks=1 00:15:18.759 00:15:18.759 ' 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:18.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.759 --rc genhtml_branch_coverage=1 00:15:18.759 --rc genhtml_function_coverage=1 00:15:18.759 --rc genhtml_legend=1 00:15:18.759 --rc geninfo_all_blocks=1 00:15:18.759 --rc geninfo_unexecuted_blocks=1 00:15:18.759 00:15:18.759 ' 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:18.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.759 --rc genhtml_branch_coverage=1 00:15:18.759 --rc genhtml_function_coverage=1 00:15:18.759 --rc genhtml_legend=1 00:15:18.759 --rc geninfo_all_blocks=1 00:15:18.759 --rc geninfo_unexecuted_blocks=1 00:15:18.759 00:15:18.759 ' 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:18.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.759 --rc genhtml_branch_coverage=1 00:15:18.759 --rc genhtml_function_coverage=1 00:15:18.759 --rc genhtml_legend=1 00:15:18.759 --rc geninfo_all_blocks=1 00:15:18.759 --rc geninfo_unexecuted_blocks=1 00:15:18.759 00:15:18.759 ' 00:15:18.759 21:46:26 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:15:18.759 21:46:26 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:18.759 21:46:26 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:18.759 21:46:26 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:18.759 21:46:26 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:18.760 21:46:26 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:18.760 21:46:26 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:18.760 21:46:26 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:18.760 #define SPDK_CONFIG_H 00:15:18.760 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:18.760 #define SPDK_CONFIG_APPS 1 00:15:18.760 #define SPDK_CONFIG_ARCH native 00:15:18.760 #define SPDK_CONFIG_ASAN 1 00:15:18.760 #undef SPDK_CONFIG_AVAHI 00:15:18.760 #undef SPDK_CONFIG_CET 00:15:18.760 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:18.760 #define SPDK_CONFIG_COVERAGE 1 00:15:18.760 #define SPDK_CONFIG_CROSS_PREFIX 00:15:18.760 #undef SPDK_CONFIG_CRYPTO 00:15:18.760 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:18.760 #undef SPDK_CONFIG_CUSTOMOCF 00:15:18.760 #undef SPDK_CONFIG_DAOS 00:15:18.760 #define SPDK_CONFIG_DAOS_DIR 00:15:18.760 #define SPDK_CONFIG_DEBUG 1 00:15:18.760 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:18.760 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:18.760 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:18.760 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:18.760 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:18.760 #undef SPDK_CONFIG_DPDK_UADK 00:15:18.760 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:18.760 #define SPDK_CONFIG_EXAMPLES 1 00:15:18.760 #undef SPDK_CONFIG_FC 00:15:18.760 #define SPDK_CONFIG_FC_PATH 00:15:18.760 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:18.760 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:18.760 #define SPDK_CONFIG_FSDEV 1 00:15:18.760 #undef SPDK_CONFIG_FUSE 00:15:18.760 #undef SPDK_CONFIG_FUZZER 00:15:18.760 #define SPDK_CONFIG_FUZZER_LIB 00:15:18.760 #undef SPDK_CONFIG_GOLANG 00:15:18.760 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:18.760 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:18.760 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:18.760 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:18.760 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:18.760 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:18.760 #undef SPDK_CONFIG_HAVE_LZ4 00:15:18.760 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:18.760 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:18.760 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:18.760 #define SPDK_CONFIG_IDXD 1 00:15:18.760 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:18.760 #undef SPDK_CONFIG_IPSEC_MB 00:15:18.760 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:18.760 #define SPDK_CONFIG_ISAL 1 00:15:18.760 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:18.760 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:18.760 #define SPDK_CONFIG_LIBDIR 00:15:18.760 #undef SPDK_CONFIG_LTO 00:15:18.760 #define SPDK_CONFIG_MAX_LCORES 128 00:15:18.760 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:18.760 #define SPDK_CONFIG_NVME_CUSE 1 00:15:18.760 #undef SPDK_CONFIG_OCF 00:15:18.760 #define SPDK_CONFIG_OCF_PATH 00:15:18.760 #define SPDK_CONFIG_OPENSSL_PATH 00:15:18.760 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:18.760 #define SPDK_CONFIG_PGO_DIR 00:15:18.760 #undef SPDK_CONFIG_PGO_USE 00:15:18.760 #define SPDK_CONFIG_PREFIX /usr/local 00:15:18.760 #undef SPDK_CONFIG_RAID5F 00:15:18.760 #undef SPDK_CONFIG_RBD 00:15:18.760 #define SPDK_CONFIG_RDMA 1 00:15:18.760 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:18.760 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:18.760 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:18.760 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:18.760 #define SPDK_CONFIG_SHARED 1 00:15:18.760 #undef SPDK_CONFIG_SMA 00:15:18.760 #define SPDK_CONFIG_TESTS 1 00:15:18.760 #undef SPDK_CONFIG_TSAN 00:15:18.760 #define SPDK_CONFIG_UBLK 1 00:15:18.760 #define SPDK_CONFIG_UBSAN 1 00:15:18.760 #undef SPDK_CONFIG_UNIT_TESTS 00:15:18.760 #undef SPDK_CONFIG_URING 00:15:18.760 #define SPDK_CONFIG_URING_PATH 00:15:18.760 #undef SPDK_CONFIG_URING_ZNS 00:15:18.760 #undef SPDK_CONFIG_USDT 00:15:18.760 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:18.760 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:18.760 #undef SPDK_CONFIG_VFIO_USER 00:15:18.760 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:18.760 #define SPDK_CONFIG_VHOST 1 00:15:18.760 #define SPDK_CONFIG_VIRTIO 1 00:15:18.760 #undef SPDK_CONFIG_VTUNE 00:15:18.760 #define SPDK_CONFIG_VTUNE_DIR 00:15:18.760 #define SPDK_CONFIG_WERROR 1 00:15:18.760 #define SPDK_CONFIG_WPDK_DIR 00:15:18.760 #define SPDK_CONFIG_XNVME 1 00:15:18.760 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:18.760 21:46:26 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:18.760 21:46:26 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.760 21:46:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:18.760 21:46:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.760 21:46:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.760 21:46:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.760 21:46:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.760 21:46:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.760 21:46:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.760 21:46:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:18.761 21:46:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@68 -- # uname -s 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:18.761 
21:46:26 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:15:18.761 21:46:26 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:15:18.761 21:46:26 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:18.761 21:46:26 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:18.762 21:46:26 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
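(Editor's note: the long run of "-- # : 0" / "-- # export SPDK_TEST_*" pairs traced above is the standard bash default-assignment idiom in autotest_common.sh: each test flag receives a default only if the environment has not already set it, then is exported so child scripts inherit it. A minimal sketch of the pattern, with flag names taken from the trace — the defaults shown here are placeholders, and the trace shows the values actually in effect for this run, e.g. SPDK_TEST_NVME=1 and SPDK_TEST_XNVME=1:

    # ':' is the no-op builtin; the ${VAR:=default} expansion assigns the
    # default as a side effect when VAR is unset or empty. xtrace renders
    # this as '-- # : 0' followed by '-- # export VAR', as seen above.
    : "${SPDK_TEST_NVME:=0}";   export SPDK_TEST_NVME
    : "${SPDK_TEST_XNVME:=0}";  export SPDK_TEST_XNVME
    : "${SPDK_RUN_ASAN:=0}";    export SPDK_RUN_ASAN
)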
00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 71394 ]] 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 71394 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Rx4lhL 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.Rx4lhL/tests/xnvme /tmp/spdk.Rx4lhL 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:18.762 21:46:26 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13973061632 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5595250688 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:18.762 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13973061632 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5595250688 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.763 21:46:26 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94823903232 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4878876672 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:18.763 * Looking for test storage... 
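(Editor's note: the entries above show set_test_storage() filling per-mountpoint associative arrays — mounts, fss, sizes, avails, uses — from df output; the lines that follow walk the candidate directories and keep the first one whose filesystem can hold the requested test data. A condensed sketch of that logic using the same names as the trace, not the verbatim SPDK function; the --block-size=1 flag is an assumption made because the traced values are in bytes:

    # Fill per-mountpoint arrays; the trace reads columns in the order
    # source, fs type, size, used, available, use%, mountpoint.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T --block-size=1 | grep -v Filesystem)

    # The first candidate directory with enough free space wins.
    requested_size=2214592512   # 2 GiB request plus 64 MiB slack, per the trace
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space >= requested_size )) && break
    done
)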
00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13973061632 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:18.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:18.763 21:46:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:19.023 21:46:26 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.023 21:46:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:19.023 21:46:26 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.023 21:46:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:19.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.023 --rc genhtml_branch_coverage=1 00:15:19.023 --rc genhtml_function_coverage=1 00:15:19.023 --rc genhtml_legend=1 00:15:19.023 --rc geninfo_all_blocks=1 00:15:19.023 --rc geninfo_unexecuted_blocks=1 00:15:19.023 00:15:19.023 ' 00:15:19.023 21:46:26 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:19.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.023 --rc genhtml_branch_coverage=1 00:15:19.023 --rc genhtml_function_coverage=1 00:15:19.024 --rc genhtml_legend=1 00:15:19.024 --rc geninfo_all_blocks=1 
00:15:19.024 --rc geninfo_unexecuted_blocks=1 00:15:19.024 00:15:19.024 ' 00:15:19.024 21:46:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:19.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.024 --rc genhtml_branch_coverage=1 00:15:19.024 --rc genhtml_function_coverage=1 00:15:19.024 --rc genhtml_legend=1 00:15:19.024 --rc geninfo_all_blocks=1 00:15:19.024 --rc geninfo_unexecuted_blocks=1 00:15:19.024 00:15:19.024 ' 00:15:19.024 21:46:26 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:19.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.024 --rc genhtml_branch_coverage=1 00:15:19.024 --rc genhtml_function_coverage=1 00:15:19.024 --rc genhtml_legend=1 00:15:19.024 --rc geninfo_all_blocks=1 00:15:19.024 --rc geninfo_unexecuted_blocks=1 00:15:19.024 00:15:19.024 ' 00:15:19.024 21:46:26 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.024 21:46:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.024 21:46:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.024 21:46:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.024 21:46:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.024 21:46:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.024 21:46:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.024 21:46:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.024 21:46:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:19.024 21:46:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.024 21:46:26 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:15:19.024 21:46:26 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:19.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:19.850 Waiting for block devices as requested 00:15:19.850 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:19.850 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:20.108 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:20.108 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.408 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:25.408 21:46:32 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:15:25.669 21:46:33 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:15:25.669 21:46:33 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:15:25.928 21:46:33 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:25.928 21:46:33 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:25.928 No valid GPT data, bailing 00:15:25.928 21:46:33 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:25.928 21:46:33 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:15:25.928 21:46:33 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:25.928 21:46:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:25.928 21:46:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:25.928 21:46:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.928 21:46:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.928 ************************************ 00:15:25.928 START TEST xnvme_rpc 00:15:25.928 ************************************ 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71789 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71789 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.928 21:46:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71789 ']' 00:15:25.929 21:46:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.929 21:46:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.929 21:46:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.929 21:46:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.929 21:46:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.929 [2024-12-10 21:46:33.640022] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
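(Editor's note: the xnvme_rpc test traced below drives the freshly started spdk_tgt over its default UNIX-socket RPC channel: it creates an xnvme bdev on the raw namespace, reads the bdev config back and compares every creation parameter, then deletes the bdev and kills the target. Condensed from the traced commands — rpc_cmd is the harness wrapper around scripts/rpc.py, and rpc_xnvme is the helper from xnvme/common.sh that pairs framework_get_config with jq:

    # Create the bdev with the libaio I/O mechanism; the trailing '' is the
    # empty conserve_cpu flag (it becomes -c when conserve_cpu is true).
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''

    # Read the config back and check one creation parameter at a time.
    rpc_xnvme() {
        rpc_cmd framework_get_config bdev \
            | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
    }
    [[ $(rpc_xnvme name) == xnvme_bdev ]]
    [[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
    [[ $(rpc_xnvme io_mechanism) == libaio ]]
    [[ $(rpc_xnvme conserve_cpu) == false ]]

    # Tear down: remove the bdev, then stop the target process.
    rpc_cmd bdev_xnvme_delete xnvme_bdev
    killprocess "$spdk_tgt"
)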
00:15:25.929 [2024-12-10 21:46:33.640350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71789 ] 00:15:26.188 [2024-12-10 21:46:33.824041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.447 [2024-12-10 21:46:33.950077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.382 xnvme_bdev 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.382 21:46:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71789 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71789 ']' 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71789 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71789 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.382 killing process with pid 71789 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71789' 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71789 00:15:27.382 21:46:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71789 00:15:29.917 00:15:29.917 real 0m4.015s 00:15:29.917 user 0m3.991s 00:15:29.917 sys 0m0.583s 00:15:29.917 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.917 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.917 ************************************ 00:15:29.917 END TEST xnvme_rpc 00:15:29.917 ************************************ 00:15:29.917 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:29.917 21:46:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:29.917 21:46:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.917 21:46:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.917 ************************************ 00:15:29.917 START TEST xnvme_bdevperf 00:15:29.917 ************************************ 00:15:29.917 21:46:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:29.917 21:46:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:29.917 21:46:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:29.917 21:46:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:29.917 21:46:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:29.917 21:46:37 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:29.917 21:46:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:29.917 21:46:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:30.177 { 00:15:30.177 "subsystems": [ 00:15:30.177 { 00:15:30.177 "subsystem": "bdev", 00:15:30.177 "config": [ 00:15:30.177 { 00:15:30.177 "params": { 00:15:30.177 "io_mechanism": "libaio", 00:15:30.177 "conserve_cpu": false, 00:15:30.177 "filename": "/dev/nvme0n1", 00:15:30.177 "name": "xnvme_bdev" 00:15:30.177 }, 00:15:30.177 "method": "bdev_xnvme_create" 00:15:30.177 }, 00:15:30.177 { 00:15:30.177 "method": "bdev_wait_for_examine" 00:15:30.177 } 00:15:30.177 ] 00:15:30.177 } 00:15:30.177 ] 00:15:30.177 } 00:15:30.177 [2024-12-10 21:46:37.701409] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:15:30.177 [2024-12-10 21:46:37.701546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71873 ] 00:15:30.177 [2024-12-10 21:46:37.885165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.436 [2024-12-10 21:46:38.029928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.005 Running I/O for 5 seconds... 00:15:32.926 40223.00 IOPS, 157.12 MiB/s [2024-12-10T21:46:41.595Z] 39808.50 IOPS, 155.50 MiB/s [2024-12-10T21:46:42.530Z] 39977.33 IOPS, 156.16 MiB/s [2024-12-10T21:46:43.464Z] 40115.75 IOPS, 156.70 MiB/s 00:15:35.733 Latency(us) 00:15:35.733 [2024-12-10T21:46:43.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.733 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:35.733 xnvme_bdev : 5.00 40334.44 157.56 0.00 0.00 1583.04 157.92 3684.76 00:15:35.733 [2024-12-10T21:46:43.464Z] =================================================================================================================== 00:15:35.733 [2024-12-10T21:46:43.464Z] Total : 40334.44 157.56 0.00 0.00 1583.04 157.92 3684.76 00:15:37.112 21:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:37.112 21:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:37.112 21:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:37.112 21:46:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:37.112 21:46:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:37.112 { 00:15:37.112 "subsystems": [ 00:15:37.112 { 00:15:37.112 "subsystem": "bdev", 00:15:37.112 "config": [ 00:15:37.112 { 00:15:37.112 "params": { 00:15:37.112 "io_mechanism": "libaio", 00:15:37.112 "conserve_cpu": false, 00:15:37.112 "filename": "/dev/nvme0n1", 00:15:37.112 "name": "xnvme_bdev" 00:15:37.112 }, 00:15:37.112 "method": "bdev_xnvme_create" 00:15:37.112 }, 00:15:37.112 { 00:15:37.112 "method": "bdev_wait_for_examine" 00:15:37.112 } 00:15:37.112 ] 00:15:37.112 } 00:15:37.112 ] 00:15:37.112 } 00:15:37.112 [2024-12-10 21:46:44.715317] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:15:37.112 [2024-12-10 21:46:44.715465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71955 ] 00:15:37.371 [2024-12-10 21:46:44.899850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.371 [2024-12-10 21:46:45.036234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.953 Running I/O for 5 seconds... 00:15:39.827 37545.00 IOPS, 146.66 MiB/s [2024-12-10T21:46:48.495Z] 40303.00 IOPS, 157.43 MiB/s [2024-12-10T21:46:49.871Z] 42556.00 IOPS, 166.23 MiB/s [2024-12-10T21:46:50.806Z] 43459.00 IOPS, 169.76 MiB/s 00:15:43.075 Latency(us) 00:15:43.075 [2024-12-10T21:46:50.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.075 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:43.075 xnvme_bdev : 5.00 43674.89 170.61 0.00 0.00 1461.70 156.27 3737.39 00:15:43.075 [2024-12-10T21:46:50.806Z] =================================================================================================================== 00:15:43.075 [2024-12-10T21:46:50.806Z] Total : 43674.89 170.61 0.00 0.00 1461.70 156.27 3737.39 00:15:44.012 00:15:44.012 real 0m14.032s 00:15:44.012 user 0m5.201s 00:15:44.012 sys 0m5.998s 00:15:44.012 21:46:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.012 21:46:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:44.012 ************************************ 00:15:44.012 END TEST xnvme_bdevperf 00:15:44.012 ************************************ 00:15:44.012 21:46:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:44.012 21:46:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:44.012 21:46:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.012 21:46:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:44.012 ************************************ 00:15:44.012 START TEST xnvme_fio_plugin 00:15:44.012 ************************************ 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:44.012 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:44.272 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:44.272 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:44.272 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:44.272 { 00:15:44.272 "subsystems": [ 00:15:44.272 { 00:15:44.272 "subsystem": "bdev", 00:15:44.272 "config": [ 00:15:44.272 { 00:15:44.272 "params": { 00:15:44.272 "io_mechanism": "libaio", 00:15:44.272 "conserve_cpu": false, 00:15:44.272 "filename": "/dev/nvme0n1", 00:15:44.272 "name": "xnvme_bdev" 00:15:44.272 }, 00:15:44.272 "method": "bdev_xnvme_create" 00:15:44.272 }, 00:15:44.272 { 00:15:44.272 "method": "bdev_wait_for_examine" 00:15:44.272 } 00:15:44.272 ] 00:15:44.272 } 00:15:44.272 ] 00:15:44.272 } 00:15:44.272 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:44.272 21:46:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.272 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:44.272 fio-3.35 00:15:44.272 Starting 1 thread 00:15:50.870 00:15:50.870 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72074: Tue Dec 10 21:46:57 2024 00:15:50.870 read: IOPS=47.2k, BW=184MiB/s (193MB/s)(921MiB/5001msec) 00:15:50.870 slat (usec): min=4, max=923, avg=18.75, stdev=20.42 00:15:50.870 clat (usec): min=45, max=6431, avg=791.20, stdev=494.38 00:15:50.870 lat (usec): min=97, max=6489, avg=809.95, stdev=498.07 00:15:50.870 clat percentiles (usec): 00:15:50.870 | 1.00th=[ 161], 5.00th=[ 233], 10.00th=[ 297], 20.00th=[ 408], 00:15:50.870 | 30.00th=[ 510], 40.00th=[ 611], 50.00th=[ 717], 60.00th=[ 824], 00:15:50.870 | 70.00th=[ 938], 80.00th=[ 1057], 90.00th=[ 1270], 95.00th=[ 1582], 00:15:50.870 | 99.00th=[ 2769], 99.50th=[ 3326], 99.90th=[ 4293], 99.95th=[ 4686], 00:15:50.870 | 99.99th=[ 5342] 00:15:50.870 bw ( KiB/s): min=162504, max=203064, per=100.00%, avg=191340.44, stdev=12480.95, samples=9 
00:15:50.870 iops : min=40626, max=50766, avg=47835.11, stdev=3120.24, samples=9 00:15:50.870 lat (usec) : 50=0.01%, 100=0.03%, 250=6.27%, 500=22.61%, 750=24.01% 00:15:50.870 lat (usec) : 1000=22.47% 00:15:50.870 lat (msec) : 2=21.88%, 4=2.57%, 10=0.16% 00:15:50.870 cpu : usr=25.22%, sys=52.98%, ctx=70, majf=0, minf=764 00:15:50.870 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=11.0%, 16=26.4%, 32=55.6%, >=64=1.7% 00:15:50.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.870 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:50.870 issued rwts: total=235832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:50.870 00:15:50.870 Run status group 0 (all jobs): 00:15:50.870 READ: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=921MiB (966MB), run=5001-5001msec 00:15:51.437 ----------------------------------------------------- 00:15:51.437 Suppressions used: 00:15:51.437 count bytes template 00:15:51.437 1 11 /usr/src/fio/parse.c 00:15:51.437 1 8 libtcmalloc_minimal.so 00:15:51.437 1 904 libcrypto.so 00:15:51.437 ----------------------------------------------------- 00:15:51.437 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:51.777 21:46:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:51.777 21:46:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:51.777 { 00:15:51.777 "subsystems": [ 00:15:51.777 { 00:15:51.777 "subsystem": "bdev", 00:15:51.777 "config": [ 00:15:51.777 { 00:15:51.777 "params": { 00:15:51.777 "io_mechanism": "libaio", 00:15:51.777 "conserve_cpu": false, 00:15:51.777 "filename": "/dev/nvme0n1", 00:15:51.777 "name": "xnvme_bdev" 00:15:51.777 }, 00:15:51.777 "method": "bdev_xnvme_create" 00:15:51.777 }, 00:15:51.777 { 00:15:51.777 "method": "bdev_wait_for_examine" 00:15:51.777 } 00:15:51.777 ] 00:15:51.777 } 00:15:51.777 ] 00:15:51.777 } 00:15:51.777 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:51.777 fio-3.35 00:15:51.777 Starting 1 thread 00:15:58.352 00:15:58.352 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72177: Tue Dec 10 21:47:05 2024 00:15:58.352 write: IOPS=44.5k, BW=174MiB/s (182MB/s)(868MiB/5001msec); 0 zone resets 00:15:58.352 slat (usec): min=4, max=960, avg=19.70, stdev=24.74 00:15:58.352 clat (usec): min=58, max=11395, avg=850.02, stdev=571.80 00:15:58.352 lat (usec): min=134, max=11408, avg=869.72, stdev=576.23 00:15:58.352 clat percentiles (usec): 00:15:58.352 | 1.00th=[ 182], 5.00th=[ 260], 10.00th=[ 326], 20.00th=[ 445], 00:15:58.352 | 30.00th=[ 553], 40.00th=[ 660], 50.00th=[ 766], 60.00th=[ 865], 00:15:58.352 | 70.00th=[ 979], 80.00th=[ 1106], 90.00th=[ 1336], 95.00th=[ 1713], 00:15:58.352 | 99.00th=[ 3261], 99.50th=[ 3916], 99.90th=[ 5080], 99.95th=[ 5473], 00:15:58.352 | 99.99th=[ 9896] 00:15:58.352 bw ( KiB/s): min=141768, max=212360, per=100.00%, avg=178651.56, stdev=19199.58, samples=9 00:15:58.352 iops : min=35442, max=53090, avg=44662.89, stdev=4799.90, samples=9 00:15:58.352 lat (usec) : 100=0.03%, 250=4.39%, 500=20.46%, 750=23.89%, 1000=23.32% 00:15:58.352 lat (msec) : 2=24.44%, 4=3.02%, 10=0.44%, 20=0.01% 00:15:58.352 cpu : usr=25.12%, sys=53.76%, ctx=67, majf=0, minf=765 00:15:58.352 IO depths : 1=0.1%, 2=0.9%, 4=3.9%, 8=10.9%, 16=25.9%, 32=56.4%, >=64=1.8% 00:15:58.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.352 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:58.352 issued rwts: total=0,222324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.352 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:58.352 00:15:58.352 Run status group 0 (all jobs): 00:15:58.352 WRITE: bw=174MiB/s (182MB/s), 174MiB/s-174MiB/s (182MB/s-182MB/s), io=868MiB (911MB), run=5001-5001msec 00:15:58.921 ----------------------------------------------------- 00:15:58.921 Suppressions used: 00:15:58.921 count bytes template 00:15:58.921 1 11 /usr/src/fio/parse.c 00:15:58.921 1 8 libtcmalloc_minimal.so 00:15:58.921 1 904 libcrypto.so 00:15:58.921 ----------------------------------------------------- 00:15:58.921 00:15:59.181 00:15:59.181 real 0m14.985s 00:15:59.181 user 0m6.377s 00:15:59.181 sys 0m6.137s 
00:15:59.181 ************************************ 00:15:59.181 END TEST xnvme_fio_plugin 00:15:59.181 ************************************ 00:15:59.181 21:47:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.181 21:47:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:59.181 21:47:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:59.181 21:47:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:59.181 21:47:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:59.181 21:47:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:59.181 21:47:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:59.181 21:47:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.181 21:47:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.181 ************************************ 00:15:59.181 START TEST xnvme_rpc 00:15:59.181 ************************************ 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72263 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72263 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72263 ']' 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.181 21:47:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:59.181 [2024-12-10 21:47:06.867826] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:15:59.181 [2024-12-10 21:47:06.867997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72263 ] 00:15:59.440 [2024-12-10 21:47:07.046707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.700 [2024-12-10 21:47:07.178663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.639 xnvme_bdev 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72263 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72263 ']' 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72263 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72263 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.639 killing process with pid 72263 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72263' 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72263 00:16:00.639 21:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72263 00:16:03.203 00:16:03.203 real 0m4.024s 00:16:03.203 user 0m4.059s 00:16:03.203 sys 0m0.572s 00:16:03.203 21:47:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.203 21:47:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.203 ************************************ 00:16:03.203 END TEST xnvme_rpc 00:16:03.203 ************************************ 00:16:03.203 21:47:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:03.203 21:47:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:03.203 21:47:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.203 21:47:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:03.203 ************************************ 00:16:03.203 START TEST xnvme_bdevperf 00:16:03.203 ************************************ 00:16:03.203 21:47:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:03.203 21:47:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:03.203 21:47:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:03.203 21:47:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:03.203 21:47:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:03.203 21:47:10 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:03.203 21:47:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:03.203 21:47:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:03.203 { 00:16:03.203 "subsystems": [ 00:16:03.203 { 00:16:03.204 "subsystem": "bdev", 00:16:03.204 "config": [ 00:16:03.204 { 00:16:03.204 "params": { 00:16:03.204 "io_mechanism": "libaio", 00:16:03.204 "conserve_cpu": true, 00:16:03.204 "filename": "/dev/nvme0n1", 00:16:03.204 "name": "xnvme_bdev" 00:16:03.204 }, 00:16:03.204 "method": "bdev_xnvme_create" 00:16:03.204 }, 00:16:03.204 { 00:16:03.204 "method": "bdev_wait_for_examine" 00:16:03.204 } 00:16:03.204 ] 00:16:03.204 } 00:16:03.204 ] 00:16:03.204 } 00:16:03.204 [2024-12-10 21:47:10.924750] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:16:03.204 [2024-12-10 21:47:10.924902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72343 ] 00:16:03.461 [2024-12-10 21:47:11.109640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.719 [2024-12-10 21:47:11.244261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.977 Running I/O for 5 seconds... 00:16:06.291 37502.00 IOPS, 146.49 MiB/s [2024-12-10T21:47:14.959Z] 36742.50 IOPS, 143.53 MiB/s [2024-12-10T21:47:15.896Z] 37691.00 IOPS, 147.23 MiB/s [2024-12-10T21:47:16.834Z] 38584.25 IOPS, 150.72 MiB/s [2024-12-10T21:47:16.834Z] 38884.40 IOPS, 151.89 MiB/s 00:16:09.103 Latency(us) 00:16:09.103 [2024-12-10T21:47:16.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.103 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:09.103 xnvme_bdev : 5.00 38865.69 151.82 0.00 0.00 1642.91 411.24 6711.52 00:16:09.103 [2024-12-10T21:47:16.834Z] =================================================================================================================== 00:16:09.103 [2024-12-10T21:47:16.834Z] Total : 38865.69 151.82 0.00 0.00 1642.91 411.24 6711.52 00:16:10.482 21:47:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:10.482 21:47:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:10.482 21:47:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:10.482 21:47:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:10.482 21:47:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:10.482 { 00:16:10.482 "subsystems": [ 00:16:10.482 { 00:16:10.482 "subsystem": "bdev", 00:16:10.482 "config": [ 00:16:10.482 { 00:16:10.482 "params": { 00:16:10.482 "io_mechanism": "libaio", 00:16:10.482 "conserve_cpu": true, 00:16:10.482 "filename": "/dev/nvme0n1", 00:16:10.482 "name": "xnvme_bdev" 00:16:10.482 }, 00:16:10.482 "method": "bdev_xnvme_create" 00:16:10.482 }, 00:16:10.482 { 00:16:10.482 "method": "bdev_wait_for_examine" 00:16:10.482 } 00:16:10.482 ] 00:16:10.482 } 00:16:10.482 ] 00:16:10.482 } 00:16:10.482 [2024-12-10 21:47:17.910406] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:16:10.482 [2024-12-10 21:47:17.911180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72424 ] 00:16:10.482 [2024-12-10 21:47:18.096838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.743 [2024-12-10 21:47:18.227391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.002 Running I/O for 5 seconds... 00:16:12.876 39412.00 IOPS, 153.95 MiB/s [2024-12-10T21:47:21.984Z] 40109.50 IOPS, 156.68 MiB/s [2024-12-10T21:47:22.922Z] 40398.00 IOPS, 157.80 MiB/s [2024-12-10T21:47:23.859Z] 40433.00 IOPS, 157.94 MiB/s [2024-12-10T21:47:23.859Z] 40477.60 IOPS, 158.12 MiB/s 00:16:16.128 Latency(us) 00:16:16.128 [2024-12-10T21:47:23.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.128 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:16.128 xnvme_bdev : 5.00 40451.54 158.01 0.00 0.00 1578.25 60.45 37058.11 00:16:16.128 [2024-12-10T21:47:23.859Z] =================================================================================================================== 00:16:16.128 [2024-12-10T21:47:23.859Z] Total : 40451.54 158.01 0.00 0.00 1578.25 60.45 37058.11 00:16:17.507 00:16:17.507 real 0m13.961s 00:16:17.507 user 0m5.291s 00:16:17.507 sys 0m5.976s 00:16:17.507 21:47:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.507 21:47:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 ************************************ 00:16:17.507 END TEST xnvme_bdevperf 00:16:17.507 ************************************ 00:16:17.507 21:47:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:17.507 21:47:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:17.507 21:47:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.507 21:47:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 ************************************ 00:16:17.507 START TEST xnvme_fio_plugin 00:16:17.507 ************************************ 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:17.507 21:47:24 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:17.507 { 00:16:17.507 "subsystems": [ 00:16:17.507 { 00:16:17.507 "subsystem": "bdev", 00:16:17.507 "config": [ 00:16:17.507 { 00:16:17.507 "params": { 00:16:17.507 "io_mechanism": "libaio", 00:16:17.507 "conserve_cpu": true, 00:16:17.507 "filename": "/dev/nvme0n1", 00:16:17.507 "name": "xnvme_bdev" 00:16:17.507 }, 00:16:17.507 "method": "bdev_xnvme_create" 00:16:17.507 }, 00:16:17.507 { 00:16:17.507 "method": "bdev_wait_for_examine" 00:16:17.507 } 00:16:17.507 ] 00:16:17.507 } 00:16:17.507 ] 00:16:17.507 } 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:17.507 21:47:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:17.507 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:17.507 fio-3.35 00:16:17.507 Starting 1 thread 00:16:24.072 00:16:24.073 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72553: Tue Dec 10 21:47:30 2024 00:16:24.073 read: IOPS=46.8k, BW=183MiB/s (192MB/s)(915MiB/5001msec) 00:16:24.073 slat (usec): min=4, max=633, avg=18.32, stdev=26.73 00:16:24.073 clat (usec): min=91, max=6267, avg=832.57, stdev=501.64 00:16:24.073 lat (usec): min=141, max=6357, avg=850.89, stdev=505.27 00:16:24.073 clat percentiles (usec): 00:16:24.073 | 1.00th=[ 186], 5.00th=[ 273], 10.00th=[ 347], 20.00th=[ 474], 00:16:24.073 | 30.00th=[ 578], 40.00th=[ 676], 50.00th=[ 766], 60.00th=[ 857], 00:16:24.073 | 70.00th=[ 955], 80.00th=[ 1074], 90.00th=[ 1270], 95.00th=[ 1532], 00:16:24.073 | 99.00th=[ 3064], 99.50th=[ 3687], 99.90th=[ 4555], 99.95th=[ 4817], 00:16:24.073 | 99.99th=[ 5342] 00:16:24.073 bw ( KiB/s): 
min=172544, max=204424, per=100.00%, avg=189498.67, stdev=11533.41, samples=9 00:16:24.073 iops : min=43136, max=51106, avg=47374.67, stdev=2883.35, samples=9 00:16:24.073 lat (usec) : 100=0.02%, 250=3.78%, 500=18.66%, 750=25.53%, 1000=26.30% 00:16:24.073 lat (msec) : 2=22.92%, 4=2.47%, 10=0.32% 00:16:24.073 cpu : usr=27.84%, sys=52.94%, ctx=60, majf=0, minf=764 00:16:24.073 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=10.1%, 16=25.2%, 32=58.2%, >=64=1.9% 00:16:24.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.073 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:24.073 issued rwts: total=234150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.073 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.073 00:16:24.073 Run status group 0 (all jobs): 00:16:24.073 READ: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=915MiB (959MB), run=5001-5001msec 00:16:24.640 ----------------------------------------------------- 00:16:24.640 Suppressions used: 00:16:24.640 count bytes template 00:16:24.640 1 11 /usr/src/fio/parse.c 00:16:24.640 1 8 libtcmalloc_minimal.so 00:16:24.640 1 904 libcrypto.so 00:16:24.640 ----------------------------------------------------- 00:16:24.640 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:24.899 21:47:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:24.899 { 00:16:24.899 "subsystems": [ 00:16:24.899 { 00:16:24.899 "subsystem": "bdev", 00:16:24.899 "config": [ 00:16:24.899 { 00:16:24.899 "params": { 00:16:24.899 "io_mechanism": "libaio", 00:16:24.899 "conserve_cpu": true, 00:16:24.899 "filename": "/dev/nvme0n1", 00:16:24.899 "name": "xnvme_bdev" 00:16:24.899 }, 00:16:24.899 "method": "bdev_xnvme_create" 00:16:24.899 }, 00:16:24.899 { 00:16:24.899 "method": "bdev_wait_for_examine" 00:16:24.899 } 00:16:24.899 ] 00:16:24.899 } 00:16:24.899 ] 00:16:24.899 } 00:16:24.899 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:24.899 fio-3.35 00:16:24.899 Starting 1 thread 00:16:31.462 00:16:31.462 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72646: Tue Dec 10 21:47:38 2024 00:16:31.462 write: IOPS=43.5k, BW=170MiB/s (178MB/s)(850MiB/5001msec); 0 zone resets 00:16:31.462 slat (usec): min=4, max=575, avg=20.18, stdev=25.31 00:16:31.462 clat (usec): min=82, max=6055, avg=864.30, stdev=550.40 00:16:31.462 lat (usec): min=127, max=6153, avg=884.47, stdev=555.29 00:16:31.462 clat percentiles (usec): 00:16:31.462 | 1.00th=[ 188], 5.00th=[ 265], 10.00th=[ 338], 20.00th=[ 461], 00:16:31.462 | 30.00th=[ 570], 40.00th=[ 676], 50.00th=[ 775], 60.00th=[ 881], 00:16:31.462 | 70.00th=[ 996], 80.00th=[ 1139], 90.00th=[ 1369], 95.00th=[ 1729], 00:16:31.462 | 99.00th=[ 3261], 99.50th=[ 3818], 99.90th=[ 4555], 99.95th=[ 4752], 00:16:31.462 | 99.99th=[ 5211] 00:16:31.462 bw ( KiB/s): min=146536, max=188360, per=99.80%, avg=173708.44, stdev=12712.73, samples=9 00:16:31.462 iops : min=36632, max=47090, avg=43426.89, stdev=3178.72, samples=9 00:16:31.462 lat (usec) : 100=0.02%, 250=4.08%, 500=19.48%, 750=23.85%, 1000=22.85% 00:16:31.462 lat (msec) : 2=26.05%, 4=3.29%, 10=0.38% 00:16:31.462 cpu : usr=24.50%, sys=55.26%, ctx=55, majf=0, minf=765 00:16:31.462 IO depths : 1=0.1%, 2=0.9%, 4=4.0%, 8=10.9%, 16=26.0%, 32=56.4%, >=64=1.8% 00:16:31.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.462 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:31.462 issued rwts: total=0,217605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.462 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.462 00:16:31.462 Run status group 0 (all jobs): 00:16:31.462 WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=850MiB (891MB), run=5001-5001msec 00:16:32.397 ----------------------------------------------------- 00:16:32.397 Suppressions used: 00:16:32.397 count bytes template 00:16:32.397 1 11 /usr/src/fio/parse.c 00:16:32.397 1 8 libtcmalloc_minimal.so 00:16:32.397 1 904 libcrypto.so 00:16:32.397 ----------------------------------------------------- 00:16:32.397 00:16:32.397 00:16:32.397 real 0m14.984s 00:16:32.397 user 
0m6.463s 00:16:32.397 sys 0m6.233s 00:16:32.397 21:47:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.397 ************************************ 00:16:32.397 END TEST xnvme_fio_plugin 00:16:32.397 ************************************ 00:16:32.397 21:47:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:32.397 21:47:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:32.397 21:47:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:32.397 21:47:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.397 21:47:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:32.397 ************************************ 00:16:32.397 START TEST xnvme_rpc 00:16:32.397 ************************************ 00:16:32.397 21:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:32.397 21:47:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:32.397 21:47:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72732 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72732 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72732 ']' 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.398 21:47:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.398 [2024-12-10 21:47:40.037377] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:16:32.398 [2024-12-10 21:47:40.037527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72732 ] 00:16:32.656 [2024-12-10 21:47:40.218696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.656 [2024-12-10 21:47:40.340592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.605 xnvme_bdev 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.605 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72732 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72732 ']' 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72732 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72732 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.865 killing process with pid 72732 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72732' 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72732 00:16:33.865 21:47:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72732 00:16:36.400 00:16:36.400 real 0m3.972s 00:16:36.400 user 0m4.004s 00:16:36.400 sys 0m0.577s 00:16:36.400 21:47:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.400 ************************************ 00:16:36.400 END TEST xnvme_rpc 00:16:36.400 ************************************ 00:16:36.400 21:47:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.400 21:47:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:36.400 21:47:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:36.400 21:47:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.400 21:47:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:36.400 ************************************ 00:16:36.400 START TEST xnvme_bdevperf 00:16:36.400 ************************************ 00:16:36.400 21:47:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:36.400 21:47:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:36.400 21:47:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:36.400 21:47:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:36.400 21:47:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:36.400 21:47:43 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:36.400 21:47:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:36.400 21:47:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:36.400 { 00:16:36.400 "subsystems": [ 00:16:36.400 { 00:16:36.400 "subsystem": "bdev", 00:16:36.400 "config": [ 00:16:36.400 { 00:16:36.400 "params": { 00:16:36.400 "io_mechanism": "io_uring", 00:16:36.400 "conserve_cpu": false, 00:16:36.400 "filename": "/dev/nvme0n1", 00:16:36.400 "name": "xnvme_bdev" 00:16:36.400 }, 00:16:36.400 "method": "bdev_xnvme_create" 00:16:36.400 }, 00:16:36.400 { 00:16:36.400 "method": "bdev_wait_for_examine" 00:16:36.400 } 00:16:36.400 ] 00:16:36.400 } 00:16:36.400 ] 00:16:36.400 } 00:16:36.400 [2024-12-10 21:47:44.058871] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:16:36.400 [2024-12-10 21:47:44.058993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72817 ] 00:16:36.659 [2024-12-10 21:47:44.238757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.659 [2024-12-10 21:47:44.364030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.227 Running I/O for 5 seconds... 00:16:39.104 27008.00 IOPS, 105.50 MiB/s [2024-12-10T21:47:47.796Z] 29920.00 IOPS, 116.88 MiB/s [2024-12-10T21:47:48.733Z] 29397.33 IOPS, 114.83 MiB/s [2024-12-10T21:47:50.108Z] 29856.00 IOPS, 116.62 MiB/s [2024-12-10T21:47:50.108Z] 30886.40 IOPS, 120.65 MiB/s 00:16:42.377 Latency(us) 00:16:42.377 [2024-12-10T21:47:50.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.377 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:42.377 xnvme_bdev : 5.00 30869.38 120.58 0.00 0.00 2067.68 1296.24 5263.94 00:16:42.377 [2024-12-10T21:47:50.108Z] =================================================================================================================== 00:16:42.377 [2024-12-10T21:47:50.108Z] Total : 30869.38 120.58 0.00 0.00 2067.68 1296.24 5263.94 00:16:43.314 21:47:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:43.314 21:47:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:43.314 21:47:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:43.314 21:47:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:43.314 21:47:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:43.314 { 00:16:43.314 "subsystems": [ 00:16:43.314 { 00:16:43.314 "subsystem": "bdev", 00:16:43.314 "config": [ 00:16:43.314 { 00:16:43.314 "params": { 00:16:43.314 "io_mechanism": "io_uring", 00:16:43.314 "conserve_cpu": false, 00:16:43.314 "filename": "/dev/nvme0n1", 00:16:43.314 "name": "xnvme_bdev" 00:16:43.314 }, 00:16:43.314 "method": "bdev_xnvme_create" 00:16:43.314 }, 00:16:43.314 { 00:16:43.314 "method": "bdev_wait_for_examine" 00:16:43.314 } 00:16:43.314 ] 00:16:43.314 } 00:16:43.314 ] 00:16:43.314 } 00:16:43.314 [2024-12-10 21:47:50.993303] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:16:43.314 [2024-12-10 21:47:50.993438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72898 ] 00:16:43.573 [2024-12-10 21:47:51.175931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.832 [2024-12-10 21:47:51.309846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.090 Running I/O for 5 seconds... 00:16:45.973 31360.00 IOPS, 122.50 MiB/s [2024-12-10T21:47:55.078Z] 30944.00 IOPS, 120.88 MiB/s [2024-12-10T21:47:56.035Z] 31338.67 IOPS, 122.42 MiB/s [2024-12-10T21:47:56.971Z] 31776.00 IOPS, 124.12 MiB/s 00:16:49.240 Latency(us) 00:16:49.240 [2024-12-10T21:47:56.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.240 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:49.240 xnvme_bdev : 5.00 31365.05 122.52 0.00 0.00 2034.70 1460.74 5263.94 00:16:49.240 [2024-12-10T21:47:56.971Z] =================================================================================================================== 00:16:49.240 [2024-12-10T21:47:56.971Z] Total : 31365.05 122.52 0.00 0.00 2034.70 1460.74 5263.94 00:16:50.175 00:16:50.175 real 0m13.890s 00:16:50.175 user 0m6.458s 00:16:50.175 sys 0m7.222s 00:16:50.175 ************************************ 00:16:50.175 END TEST xnvme_bdevperf 00:16:50.175 ************************************ 00:16:50.175 21:47:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.175 21:47:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:50.434 21:47:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:50.434 21:47:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:50.434 21:47:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.434 21:47:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.434 ************************************ 00:16:50.434 START TEST xnvme_fio_plugin 00:16:50.434 ************************************ 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:50.434 21:47:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:50.434 { 00:16:50.434 "subsystems": [ 00:16:50.434 { 00:16:50.434 "subsystem": "bdev", 00:16:50.434 "config": [ 00:16:50.434 { 00:16:50.434 "params": { 00:16:50.434 "io_mechanism": "io_uring", 00:16:50.434 "conserve_cpu": false, 00:16:50.434 "filename": "/dev/nvme0n1", 00:16:50.434 "name": "xnvme_bdev" 00:16:50.434 }, 00:16:50.434 "method": "bdev_xnvme_create" 00:16:50.434 }, 00:16:50.434 { 00:16:50.434 "method": "bdev_wait_for_examine" 00:16:50.434 } 00:16:50.434 ] 00:16:50.434 } 00:16:50.434 ] 00:16:50.434 } 00:16:50.692 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:50.692 fio-3.35 00:16:50.692 Starting 1 thread 00:16:57.257 00:16:57.257 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73023: Tue Dec 10 21:48:03 2024 00:16:57.257 read: IOPS=31.6k, BW=124MiB/s (130MB/s)(618MiB/5001msec) 00:16:57.257 slat (nsec): min=4054, max=68189, avg=5186.76, stdev=1425.12 00:16:57.257 clat (usec): min=1326, max=3273, avg=1819.44, stdev=174.48 00:16:57.257 lat (usec): min=1331, max=3307, avg=1824.62, stdev=174.85 00:16:57.257 clat percentiles (usec): 00:16:57.257 | 1.00th=[ 1483], 5.00th=[ 1565], 10.00th=[ 1614], 20.00th=[ 1680], 00:16:57.257 | 30.00th=[ 1729], 40.00th=[ 1762], 50.00th=[ 1811], 60.00th=[ 1844], 00:16:57.257 | 70.00th=[ 1893], 80.00th=[ 1958], 90.00th=[ 2040], 95.00th=[ 2114], 00:16:57.257 | 99.00th=[ 2311], 99.50th=[ 2409], 99.90th=[ 2704], 99.95th=[ 2802], 00:16:57.257 | 99.99th=[ 3130] 00:16:57.257 bw ( KiB/s): min=116736, max=134144, per=99.31%, avg=125667.56, 
stdev=5449.32, samples=9 00:16:57.257 iops : min=29184, max=33536, avg=31416.89, stdev=1362.33, samples=9 00:16:57.257 lat (msec) : 2=86.55%, 4=13.45% 00:16:57.257 cpu : usr=31.00%, sys=68.10%, ctx=9, majf=0, minf=762 00:16:57.257 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:57.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.257 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:57.257 issued rwts: total=158208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.257 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:57.257 00:16:57.257 Run status group 0 (all jobs): 00:16:57.257 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=618MiB (648MB), run=5001-5001msec 00:16:57.828 ----------------------------------------------------- 00:16:57.828 Suppressions used: 00:16:57.828 count bytes template 00:16:57.828 1 11 /usr/src/fio/parse.c 00:16:57.828 1 8 libtcmalloc_minimal.so 00:16:57.828 1 904 libcrypto.so 00:16:57.828 ----------------------------------------------------- 00:16:57.828 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:57.828 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:57.829 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.829 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:57.829 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:57.829 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:57.829 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:57.829 21:48:05 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:57.829 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:57.829 21:48:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.829 { 00:16:57.829 "subsystems": [ 00:16:57.829 { 00:16:57.829 "subsystem": "bdev", 00:16:57.829 "config": [ 00:16:57.829 { 00:16:57.829 "params": { 00:16:57.829 "io_mechanism": "io_uring", 00:16:57.829 "conserve_cpu": false, 00:16:57.829 "filename": "/dev/nvme0n1", 00:16:57.829 "name": "xnvme_bdev" 00:16:57.829 }, 00:16:57.829 "method": "bdev_xnvme_create" 00:16:57.829 }, 00:16:57.829 { 00:16:57.829 "method": "bdev_wait_for_examine" 00:16:57.829 } 00:16:57.829 ] 00:16:57.829 } 00:16:57.829 ] 00:16:57.829 } 00:16:58.087 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:58.087 fio-3.35 00:16:58.087 Starting 1 thread 00:17:04.678 00:17:04.678 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73119: Tue Dec 10 21:48:11 2024 00:17:04.678 write: IOPS=31.1k, BW=122MiB/s (127MB/s)(608MiB/5001msec); 0 zone resets 00:17:04.678 slat (usec): min=2, max=128, avg= 5.47, stdev= 1.90 00:17:04.678 clat (usec): min=127, max=4914, avg=1839.58, stdev=279.29 00:17:04.678 lat (usec): min=131, max=4922, avg=1845.05, stdev=279.93 00:17:04.678 clat percentiles (usec): 00:17:04.678 | 1.00th=[ 1369], 5.00th=[ 1467], 10.00th=[ 1532], 20.00th=[ 1614], 00:17:04.678 | 30.00th=[ 1680], 40.00th=[ 1729], 50.00th=[ 1795], 60.00th=[ 1860], 00:17:04.678 | 70.00th=[ 1942], 80.00th=[ 2057], 90.00th=[ 2212], 95.00th=[ 2343], 00:17:04.678 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3458], 99.95th=[ 4113], 00:17:04.678 | 99.99th=[ 4817] 00:17:04.678 bw ( KiB/s): min=106581, max=139264, per=99.58%, avg=123970.33, stdev=10445.28, samples=9 00:17:04.678 iops : min=26645, max=34816, avg=30992.56, stdev=2611.37, samples=9 00:17:04.678 lat (usec) : 250=0.01% 00:17:04.678 lat (msec) : 2=75.54%, 4=24.40%, 10=0.06% 00:17:04.678 cpu : usr=30.54%, sys=68.46%, ctx=17, majf=0, minf=763 00:17:04.678 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:04.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.678 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:04.678 issued rwts: total=0,155650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:04.678 00:17:04.678 Run status group 0 (all jobs): 00:17:04.678 WRITE: bw=122MiB/s (127MB/s), 122MiB/s-122MiB/s (127MB/s-127MB/s), io=608MiB (638MB), run=5001-5001msec 00:17:05.246 ----------------------------------------------------- 00:17:05.246 Suppressions used: 00:17:05.246 count bytes template 00:17:05.246 1 11 /usr/src/fio/parse.c 00:17:05.246 1 8 libtcmalloc_minimal.so 00:17:05.246 1 904 libcrypto.so 00:17:05.246 ----------------------------------------------------- 00:17:05.246 00:17:05.246 ************************************ 00:17:05.246 END TEST xnvme_fio_plugin 00:17:05.246 ************************************ 00:17:05.246 00:17:05.246 real 0m14.912s 00:17:05.246 user 0m6.918s 00:17:05.246 sys 0m7.637s 00:17:05.246 21:48:12 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.246 21:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:05.246 21:48:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:05.246 21:48:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:05.246 21:48:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:05.246 21:48:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:05.246 21:48:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:05.246 21:48:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.246 21:48:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:05.246 ************************************ 00:17:05.246 START TEST xnvme_rpc 00:17:05.246 ************************************ 00:17:05.246 21:48:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:05.246 21:48:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:05.246 21:48:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:05.246 21:48:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:05.246 21:48:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:05.246 21:48:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73201 00:17:05.246 21:48:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:05.247 21:48:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73201 00:17:05.247 21:48:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73201 ']' 00:17:05.247 21:48:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.247 21:48:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.247 21:48:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.247 21:48:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.247 21:48:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.505 [2024-12-10 21:48:13.021151] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
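Stripped of the xtrace noise, the RPC round-trip traced below amounts to the following sketch (rpc_cmd is the suite's wrapper around scripts/rpc.py, talking to the spdk_tgt just started):

  rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c requests conserve_cpu=true
  rpc_cmd framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # expect /dev/nvme0n1
  rpc_cmd bdev_xnvme_delete xnvme_bdev                            # then the target is killed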
00:17:05.505 [2024-12-10 21:48:13.021301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73201 ] 00:17:05.505 [2024-12-10 21:48:13.202406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.764 [2024-12-10 21:48:13.326180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 xnvme_bdev 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73201 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73201 ']' 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73201 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.701 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73201 00:17:06.961 killing process with pid 73201 00:17:06.961 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.961 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.961 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73201' 00:17:06.961 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73201 00:17:06.961 21:48:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73201 00:17:09.501 00:17:09.501 real 0m3.987s 00:17:09.501 user 0m3.999s 00:17:09.501 sys 0m0.577s 00:17:09.501 ************************************ 00:17:09.501 END TEST xnvme_rpc 00:17:09.501 ************************************ 00:17:09.501 21:48:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.501 21:48:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.501 21:48:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:09.501 21:48:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.501 21:48:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.501 21:48:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:09.501 ************************************ 00:17:09.501 START TEST xnvme_bdevperf 00:17:09.501 ************************************ 00:17:09.501 21:48:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:09.501 21:48:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:09.501 21:48:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:09.501 21:48:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:09.501 21:48:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:09.501 21:48:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:09.501 21:48:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:09.501 21:48:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:09.501 { 00:17:09.501 "subsystems": [ 00:17:09.501 { 00:17:09.501 "subsystem": "bdev", 00:17:09.501 "config": [ 00:17:09.501 { 00:17:09.501 "params": { 00:17:09.501 "io_mechanism": "io_uring", 00:17:09.501 "conserve_cpu": true, 00:17:09.501 "filename": "/dev/nvme0n1", 00:17:09.501 "name": "xnvme_bdev" 00:17:09.501 }, 00:17:09.501 "method": "bdev_xnvme_create" 00:17:09.501 }, 00:17:09.501 { 00:17:09.501 "method": "bdev_wait_for_examine" 00:17:09.501 } 00:17:09.501 ] 00:17:09.501 } 00:17:09.501 ] 00:17:09.501 } 00:17:09.501 [2024-12-10 21:48:17.067357] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:17:09.501 [2024-12-10 21:48:17.067491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:17:09.760 [2024-12-10 21:48:17.248970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.760 [2024-12-10 21:48:17.382166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.019 Running I/O for 5 seconds... 00:17:12.335 37952.00 IOPS, 148.25 MiB/s [2024-12-10T21:48:21.035Z] 36672.00 IOPS, 143.25 MiB/s [2024-12-10T21:48:21.972Z] 37333.33 IOPS, 145.83 MiB/s [2024-12-10T21:48:22.909Z] 35168.00 IOPS, 137.38 MiB/s 00:17:15.178 Latency(us) 00:17:15.178 [2024-12-10T21:48:22.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.178 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:15.178 xnvme_bdev : 5.00 32819.15 128.20 0.00 0.00 1944.63 1013.31 8474.94 00:17:15.178 [2024-12-10T21:48:22.909Z] =================================================================================================================== 00:17:15.178 [2024-12-10T21:48:22.909Z] Total : 32819.15 128.20 0.00 0.00 1944.63 1013.31 8474.94 00:17:16.557 21:48:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:16.557 21:48:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:16.557 21:48:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:16.557 21:48:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:16.557 21:48:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:16.557 { 00:17:16.557 "subsystems": [ 00:17:16.557 { 00:17:16.557 "subsystem": "bdev", 00:17:16.557 "config": [ 00:17:16.557 { 00:17:16.557 "params": { 00:17:16.557 "io_mechanism": "io_uring", 00:17:16.557 "conserve_cpu": true, 00:17:16.557 "filename": "/dev/nvme0n1", 00:17:16.557 "name": "xnvme_bdev" 00:17:16.557 }, 00:17:16.557 "method": "bdev_xnvme_create" 00:17:16.557 }, 00:17:16.557 { 00:17:16.557 "method": "bdev_wait_for_examine" 00:17:16.557 } 00:17:16.557 ] 00:17:16.557 } 00:17:16.557 ] 00:17:16.557 } 00:17:16.557 [2024-12-10 21:48:23.972385] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:17:16.557 [2024-12-10 21:48:23.972515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73367 ] 00:17:16.557 [2024-12-10 21:48:24.157053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.557 [2024-12-10 21:48:24.280402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.126 Running I/O for 5 seconds... 00:17:19.043 23936.00 IOPS, 93.50 MiB/s [2024-12-10T21:48:27.710Z] 27776.00 IOPS, 108.50 MiB/s [2024-12-10T21:48:28.646Z] 30101.33 IOPS, 117.58 MiB/s [2024-12-10T21:48:30.024Z] 31024.00 IOPS, 121.19 MiB/s [2024-12-10T21:48:30.024Z] 31833.60 IOPS, 124.35 MiB/s 00:17:22.293 Latency(us) 00:17:22.293 [2024-12-10T21:48:30.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.293 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:22.293 xnvme_bdev : 5.00 31823.73 124.31 0.00 0.00 2005.25 1204.13 6316.72 00:17:22.293 [2024-12-10T21:48:30.024Z] =================================================================================================================== 00:17:22.293 [2024-12-10T21:48:30.024Z] Total : 31823.73 124.31 0.00 0.00 2005.25 1204.13 6316.72 00:17:23.228 00:17:23.228 real 0m13.818s 00:17:23.228 user 0m7.785s 00:17:23.228 sys 0m5.568s 00:17:23.228 ************************************ 00:17:23.228 END TEST xnvme_bdevperf 00:17:23.228 ************************************ 00:17:23.228 21:48:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.228 21:48:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:23.228 21:48:30 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:23.228 21:48:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:23.228 21:48:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.228 21:48:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.228 ************************************ 00:17:23.228 START TEST xnvme_fio_plugin 00:17:23.228 ************************************ 00:17:23.228 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:23.228 21:48:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:23.228 21:48:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:23.228 21:48:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:23.228 21:48:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.228 21:48:30 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:23.228 21:48:30 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.229 21:48:30 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:23.229 21:48:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.229 { 00:17:23.229 "subsystems": [ 00:17:23.229 { 00:17:23.229 "subsystem": "bdev", 00:17:23.229 "config": [ 00:17:23.229 { 00:17:23.229 "params": { 00:17:23.229 "io_mechanism": "io_uring", 00:17:23.229 "conserve_cpu": true, 00:17:23.229 "filename": "/dev/nvme0n1", 00:17:23.229 "name": "xnvme_bdev" 00:17:23.229 }, 00:17:23.229 "method": "bdev_xnvme_create" 00:17:23.229 }, 00:17:23.229 { 00:17:23.229 "method": "bdev_wait_for_examine" 00:17:23.229 } 00:17:23.229 ] 00:17:23.229 } 00:17:23.229 ] 00:17:23.229 } 00:17:23.487 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:23.487 fio-3.35 00:17:23.487 Starting 1 thread 00:17:30.055 00:17:30.055 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73496: Tue Dec 10 21:48:36 2024 00:17:30.055 read: IOPS=25.8k, BW=101MiB/s (105MB/s)(503MiB/5002msec) 00:17:30.055 slat (nsec): min=2528, max=86603, avg=7253.38, stdev=3441.58 00:17:30.055 clat (usec): min=1093, max=6976, avg=2196.95, stdev=439.27 00:17:30.055 lat (usec): min=1097, max=6981, avg=2204.20, stdev=441.18 00:17:30.055 clat percentiles (usec): 00:17:30.055 | 1.00th=[ 1237], 5.00th=[ 1401], 10.00th=[ 1532], 20.00th=[ 1762], 00:17:30.055 | 30.00th=[ 1991], 40.00th=[ 2147], 50.00th=[ 2278], 60.00th=[ 2376], 00:17:30.055 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2704], 95.00th=[ 2802], 00:17:30.055 | 99.00th=[ 2999], 99.50th=[ 3130], 99.90th=[ 3523], 99.95th=[ 3884], 00:17:30.055 | 99.99th=[ 4293] 
00:17:30.055 bw ( KiB/s): min=87552, max=114176, per=96.58%, avg=99497.78, stdev=9263.07, samples=9 00:17:30.055 iops : min=21888, max=28544, avg=24874.44, stdev=2315.77, samples=9 00:17:30.055 lat (msec) : 2=30.22%, 4=69.74%, 10=0.05% 00:17:30.056 cpu : usr=42.21%, sys=53.31%, ctx=12, majf=0, minf=762 00:17:30.056 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:30.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.056 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:30.056 issued rwts: total=128831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.056 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:30.056 00:17:30.056 Run status group 0 (all jobs): 00:17:30.056 READ: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s), io=503MiB (528MB), run=5002-5002msec 00:17:30.673 ----------------------------------------------------- 00:17:30.673 Suppressions used: 00:17:30.673 count bytes template 00:17:30.673 1 11 /usr/src/fio/parse.c 00:17:30.673 1 8 libtcmalloc_minimal.so 00:17:30.673 1 904 libcrypto.so 00:17:30.673 ----------------------------------------------------- 00:17:30.673 00:17:30.931 21:48:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:30.931 21:48:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.931 21:48:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:30.931 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.931 21:48:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:30.931 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:30.931 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:30.932 { 00:17:30.932 "subsystems": [ 00:17:30.932 { 00:17:30.932 "subsystem": "bdev", 00:17:30.932 "config": [ 00:17:30.932 { 00:17:30.932 "params": { 00:17:30.932 
"io_mechanism": "io_uring", 00:17:30.932 "conserve_cpu": true, 00:17:30.932 "filename": "/dev/nvme0n1", 00:17:30.932 "name": "xnvme_bdev" 00:17:30.932 }, 00:17:30.932 "method": "bdev_xnvme_create" 00:17:30.932 }, 00:17:30.932 { 00:17:30.932 "method": "bdev_wait_for_examine" 00:17:30.932 } 00:17:30.932 ] 00:17:30.932 } 00:17:30.932 ] 00:17:30.932 } 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:30.932 21:48:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.932 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:30.932 fio-3.35 00:17:30.932 Starting 1 thread 00:17:37.496 00:17:37.496 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73594: Tue Dec 10 21:48:44 2024 00:17:37.496 write: IOPS=31.9k, BW=125MiB/s (131MB/s)(624MiB/5002msec); 0 zone resets 00:17:37.496 slat (nsec): min=3564, max=56793, avg=5379.62, stdev=2037.90 00:17:37.496 clat (usec): min=639, max=5067, avg=1791.26, stdev=302.01 00:17:37.496 lat (usec): min=645, max=5071, avg=1796.64, stdev=303.15 00:17:37.496 clat percentiles (usec): 00:17:37.496 | 1.00th=[ 1336], 5.00th=[ 1418], 10.00th=[ 1467], 20.00th=[ 1532], 00:17:37.496 | 30.00th=[ 1598], 40.00th=[ 1647], 50.00th=[ 1713], 60.00th=[ 1795], 00:17:37.496 | 70.00th=[ 1909], 80.00th=[ 2057], 90.00th=[ 2245], 95.00th=[ 2376], 00:17:37.496 | 99.00th=[ 2638], 99.50th=[ 2704], 99.90th=[ 2802], 99.95th=[ 2868], 00:17:37.496 | 99.99th=[ 2966] 00:17:37.496 bw ( KiB/s): min=111104, max=148992, per=100.00%, avg=129535.11, stdev=12193.34, samples=9 00:17:37.496 iops : min=27776, max=37248, avg=32383.78, stdev=3048.34, samples=9 00:17:37.496 lat (usec) : 750=0.01% 00:17:37.496 lat (msec) : 2=76.37%, 4=23.63%, 10=0.01% 00:17:37.496 cpu : usr=48.09%, sys=48.55%, ctx=12, majf=0, minf=763 00:17:37.496 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:37.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.496 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:37.496 issued rwts: total=0,159743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.496 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:37.496 00:17:37.496 Run status group 0 (all jobs): 00:17:37.496 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=624MiB (654MB), run=5002-5002msec 00:17:38.065 ----------------------------------------------------- 00:17:38.065 Suppressions used: 00:17:38.065 count bytes template 00:17:38.065 1 11 /usr/src/fio/parse.c 00:17:38.065 1 8 libtcmalloc_minimal.so 00:17:38.065 1 904 libcrypto.so 00:17:38.065 ----------------------------------------------------- 00:17:38.065 00:17:38.324 ************************************ 00:17:38.324 END TEST xnvme_fio_plugin 00:17:38.324 ************************************ 00:17:38.324 00:17:38.324 
real 0m14.960s 00:17:38.324 user 0m8.425s 00:17:38.324 sys 0m5.889s 00:17:38.324 21:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.324 21:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:38.324 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:38.324 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:17:38.325 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:17:38.325 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:17:38.325 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:38.325 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:38.325 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:38.325 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:38.325 21:48:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:38.325 21:48:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:38.325 21:48:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.325 21:48:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:38.325 ************************************ 00:17:38.325 START TEST xnvme_rpc 00:17:38.325 ************************************ 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:38.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73676 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73676 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73676 ']' 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.325 21:48:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.325 [2024-12-10 21:48:46.017396] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
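This second xnvme_rpc pass switches the io_mechanism to io_uring_cmd, which drives the NVMe generic character device /dev/ng0n1 rather than the block device, with conserve_cpu left at its false default (the empty '' argument). The create-and-verify exchange traced below, as a sketch:

  rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''
  rpc_cmd framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect io_uring_cmd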
00:17:38.325 [2024-12-10 21:48:46.017758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73676 ] 00:17:38.584 [2024-12-10 21:48:46.198862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.842 [2024-12-10 21:48:46.331520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.779 xnvme_bdev 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:39.779 
21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73676 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73676 ']' 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73676 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73676 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.779 killing process with pid 73676 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73676' 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73676 00:17:39.779 21:48:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73676 00:17:42.311 00:17:42.311 real 0m3.988s 00:17:42.311 user 0m4.030s 00:17:42.311 sys 0m0.558s 00:17:42.311 ************************************ 00:17:42.311 END TEST xnvme_rpc 00:17:42.311 ************************************ 00:17:42.311 21:48:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.311 21:48:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.311 21:48:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:42.311 21:48:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:42.311 21:48:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.311 21:48:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:42.311 ************************************ 00:17:42.311 START TEST xnvme_bdevperf 00:17:42.311 ************************************ 00:17:42.311 21:48:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:42.311 21:48:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:42.311 21:48:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:42.311 21:48:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:42.311 21:48:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:42.311 21:48:49 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:42.311 21:48:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:42.311 21:48:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:42.311 { 00:17:42.311 "subsystems": [ 00:17:42.311 { 00:17:42.311 "subsystem": "bdev", 00:17:42.311 "config": [ 00:17:42.311 { 00:17:42.311 "params": { 00:17:42.311 "io_mechanism": "io_uring_cmd", 00:17:42.311 "conserve_cpu": false, 00:17:42.311 "filename": "/dev/ng0n1", 00:17:42.311 "name": "xnvme_bdev" 00:17:42.311 }, 00:17:42.311 "method": "bdev_xnvme_create" 00:17:42.311 }, 00:17:42.311 { 00:17:42.311 "method": "bdev_wait_for_examine" 00:17:42.311 } 00:17:42.311 ] 00:17:42.311 } 00:17:42.311 ] 00:17:42.311 } 00:17:42.602 [2024-12-10 21:48:50.061851] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:17:42.603 [2024-12-10 21:48:50.061989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73761 ] 00:17:42.603 [2024-12-10 21:48:50.244710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.877 [2024-12-10 21:48:50.362340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.136 Running I/O for 5 seconds... 00:17:45.012 28032.00 IOPS, 109.50 MiB/s [2024-12-10T21:48:54.120Z] 27040.00 IOPS, 105.62 MiB/s [2024-12-10T21:48:55.057Z] 26752.00 IOPS, 104.50 MiB/s [2024-12-10T21:48:55.992Z] 26432.00 IOPS, 103.25 MiB/s 00:17:48.261 Latency(us) 00:17:48.261 [2024-12-10T21:48:55.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.261 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:48.261 xnvme_bdev : 5.00 26462.07 103.37 0.00 0.00 2411.05 1217.29 7264.23 00:17:48.261 [2024-12-10T21:48:55.992Z] =================================================================================================================== 00:17:48.261 [2024-12-10T21:48:55.992Z] Total : 26462.07 103.37 0.00 0.00 2411.05 1217.29 7264.23 00:17:49.201 21:48:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:49.201 21:48:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:49.201 21:48:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:49.201 21:48:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:49.201 21:48:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:49.201 { 00:17:49.201 "subsystems": [ 00:17:49.201 { 00:17:49.201 "subsystem": "bdev", 00:17:49.201 "config": [ 00:17:49.201 { 00:17:49.201 "params": { 00:17:49.201 "io_mechanism": "io_uring_cmd", 00:17:49.201 "conserve_cpu": false, 00:17:49.201 "filename": "/dev/ng0n1", 00:17:49.201 "name": "xnvme_bdev" 00:17:49.201 }, 00:17:49.201 "method": "bdev_xnvme_create" 00:17:49.201 }, 00:17:49.201 { 00:17:49.201 "method": "bdev_wait_for_examine" 00:17:49.201 } 00:17:49.201 ] 00:17:49.201 } 00:17:49.201 ] 00:17:49.201 } 00:17:49.460 [2024-12-10 21:48:56.973304] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
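After the randread pass above, the remaining bdevperf runs below cycle through randwrite, unmap, and write_zeroes against the same ng0n1 bdev; only the -w flag changes between invocations. As a sketch (conf.json is an illustrative stand-in for the /dev/fd/62 config):

  for w in randwrite unmap write_zeroes; do
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          --json conf.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
  done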
00:17:49.460 [2024-12-10 21:48:56.973612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73843 ] 00:17:49.460 [2024-12-10 21:48:57.154880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.719 [2024-12-10 21:48:57.279260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.978 Running I/O for 5 seconds... 00:17:52.295 28288.00 IOPS, 110.50 MiB/s [2024-12-10T21:49:00.963Z] 27136.00 IOPS, 106.00 MiB/s [2024-12-10T21:49:01.934Z] 26709.33 IOPS, 104.33 MiB/s [2024-12-10T21:49:02.869Z] 27552.00 IOPS, 107.62 MiB/s 00:17:55.138 Latency(us) 00:17:55.138 [2024-12-10T21:49:02.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.138 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:55.138 xnvme_bdev : 5.00 27669.21 108.08 0.00 0.00 2305.78 1000.15 6106.17 00:17:55.138 [2024-12-10T21:49:02.869Z] =================================================================================================================== 00:17:55.138 [2024-12-10T21:49:02.869Z] Total : 27669.21 108.08 0.00 0.00 2305.78 1000.15 6106.17 00:17:56.515 21:49:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:56.515 21:49:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:56.515 21:49:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:56.515 21:49:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:56.515 21:49:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:56.515 { 00:17:56.515 "subsystems": [ 00:17:56.515 { 00:17:56.515 "subsystem": "bdev", 00:17:56.515 "config": [ 00:17:56.515 { 00:17:56.515 "params": { 00:17:56.515 "io_mechanism": "io_uring_cmd", 00:17:56.515 "conserve_cpu": false, 00:17:56.515 "filename": "/dev/ng0n1", 00:17:56.515 "name": "xnvme_bdev" 00:17:56.515 }, 00:17:56.515 "method": "bdev_xnvme_create" 00:17:56.515 }, 00:17:56.515 { 00:17:56.515 "method": "bdev_wait_for_examine" 00:17:56.515 } 00:17:56.515 ] 00:17:56.515 } 00:17:56.515 ] 00:17:56.515 } 00:17:56.515 [2024-12-10 21:49:03.921083] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:17:56.515 [2024-12-10 21:49:03.921477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73923 ] 00:17:56.515 [2024-12-10 21:49:04.107329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.775 [2024-12-10 21:49:04.247924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.034 Running I/O for 5 seconds... 
00:17:58.907 69504.00 IOPS, 271.50 MiB/s [2024-12-10T21:49:08.016Z] 69504.00 IOPS, 271.50 MiB/s [2024-12-10T21:49:08.954Z] 70378.67 IOPS, 274.92 MiB/s [2024-12-10T21:49:09.890Z] 70720.00 IOPS, 276.25 MiB/s [2024-12-10T21:49:09.890Z] 70656.00 IOPS, 276.00 MiB/s 00:18:02.159 Latency(us) 00:18:02.159 [2024-12-10T21:49:09.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.159 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:02.159 xnvme_bdev : 5.00 70632.82 275.91 0.00 0.00 903.42 651.41 2408.25 00:18:02.159 [2024-12-10T21:49:09.890Z] =================================================================================================================== 00:18:02.159 [2024-12-10T21:49:09.890Z] Total : 70632.82 275.91 0.00 0.00 903.42 651.41 2408.25 00:18:03.096 21:49:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:03.096 21:49:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:03.096 21:49:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:03.096 21:49:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:03.096 21:49:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:03.096 { 00:18:03.096 "subsystems": [ 00:18:03.096 { 00:18:03.096 "subsystem": "bdev", 00:18:03.096 "config": [ 00:18:03.096 { 00:18:03.096 "params": { 00:18:03.096 "io_mechanism": "io_uring_cmd", 00:18:03.096 "conserve_cpu": false, 00:18:03.096 "filename": "/dev/ng0n1", 00:18:03.096 "name": "xnvme_bdev" 00:18:03.096 }, 00:18:03.096 "method": "bdev_xnvme_create" 00:18:03.096 }, 00:18:03.096 { 00:18:03.096 "method": "bdev_wait_for_examine" 00:18:03.096 } 00:18:03.096 ] 00:18:03.096 } 00:18:03.096 ] 00:18:03.096 } 00:18:03.355 [2024-12-10 21:49:10.866980] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:18:03.355 [2024-12-10 21:49:10.867267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73999 ] 00:18:03.355 [2024-12-10 21:49:11.049780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.614 [2024-12-10 21:49:11.187313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.873 Running I/O for 5 seconds... 
00:18:06.186 68376.00 IOPS, 267.09 MiB/s [2024-12-10T21:49:14.854Z] 70041.50 IOPS, 273.60 MiB/s [2024-12-10T21:49:15.788Z] 68412.33 IOPS, 267.24 MiB/s [2024-12-10T21:49:16.772Z] 66998.00 IOPS, 261.71 MiB/s 00:18:09.041 Latency(us) 00:18:09.041 [2024-12-10T21:49:16.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.041 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:09.041 xnvme_bdev : 5.00 59059.15 230.70 0.00 0.00 1080.27 213.85 7369.51 00:18:09.041 [2024-12-10T21:49:16.772Z] =================================================================================================================== 00:18:09.041 [2024-12-10T21:49:16.772Z] Total : 59059.15 230.70 0.00 0.00 1080.27 213.85 7369.51 00:18:10.419 00:18:10.419 real 0m27.771s 00:18:10.419 user 0m14.203s 00:18:10.419 sys 0m13.159s 00:18:10.419 ************************************ 00:18:10.419 END TEST xnvme_bdevperf 00:18:10.419 ************************************ 00:18:10.419 21:49:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.419 21:49:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:10.419 21:49:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:10.419 21:49:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:10.419 21:49:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.419 21:49:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:10.419 ************************************ 00:18:10.419 START TEST xnvme_fio_plugin 00:18:10.419 ************************************ 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:10.419 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:10.420 21:49:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:10.420 { 00:18:10.420 "subsystems": [ 00:18:10.420 { 00:18:10.420 "subsystem": "bdev", 00:18:10.420 "config": [ 00:18:10.420 { 00:18:10.420 "params": { 00:18:10.420 "io_mechanism": "io_uring_cmd", 00:18:10.420 "conserve_cpu": false, 00:18:10.420 "filename": "/dev/ng0n1", 00:18:10.420 "name": "xnvme_bdev" 00:18:10.420 }, 00:18:10.420 "method": "bdev_xnvme_create" 00:18:10.420 }, 00:18:10.420 { 00:18:10.420 "method": "bdev_wait_for_examine" 00:18:10.420 } 00:18:10.420 ] 00:18:10.420 } 00:18:10.420 ] 00:18:10.420 } 00:18:10.420 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:10.420 fio-3.35 00:18:10.420 Starting 1 thread 00:18:16.985 00:18:16.985 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74123: Tue Dec 10 21:49:23 2024 00:18:16.985 read: IOPS=27.0k, BW=105MiB/s (111MB/s)(527MiB/5002msec) 00:18:16.985 slat (nsec): min=4062, max=76411, avg=6985.49, stdev=2096.92 00:18:16.985 clat (usec): min=1110, max=3411, avg=2098.17, stdev=253.87 00:18:16.985 lat (usec): min=1117, max=3423, avg=2105.15, stdev=254.49 00:18:16.985 clat percentiles (usec): 00:18:16.985 | 1.00th=[ 1450], 5.00th=[ 1663], 10.00th=[ 1778], 20.00th=[ 1909], 00:18:16.985 | 30.00th=[ 1975], 40.00th=[ 2040], 50.00th=[ 2114], 60.00th=[ 2180], 00:18:16.985 | 70.00th=[ 2245], 80.00th=[ 2311], 90.00th=[ 2409], 95.00th=[ 2474], 00:18:16.985 | 99.00th=[ 2704], 99.50th=[ 2868], 99.90th=[ 3130], 99.95th=[ 3195], 00:18:16.985 | 99.99th=[ 3326] 00:18:16.985 bw ( KiB/s): min=102912, max=116736, per=99.05%, avg=106951.11, stdev=4539.54, samples=9 00:18:16.985 iops : min=25728, max=29184, avg=26737.78, stdev=1134.89, samples=9 00:18:16.985 lat (msec) : 2=32.91%, 4=67.09% 00:18:16.985 cpu : usr=36.63%, sys=62.19%, ctx=9, majf=0, minf=762 00:18:16.985 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:16.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.985 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:16.985 issued rwts: total=135023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:16.985 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:16.985 00:18:16.985 Run status group 0 (all jobs): 00:18:16.985 READ: bw=105MiB/s (111MB/s), 105MiB/s-105MiB/s (111MB/s-111MB/s), io=527MiB (553MB), run=5002-5002msec 00:18:17.552 ----------------------------------------------------- 00:18:17.552 Suppressions used: 00:18:17.552 count bytes template 00:18:17.552 1 11 /usr/src/fio/parse.c 00:18:17.552 1 8 libtcmalloc_minimal.so 00:18:17.552 1 904 libcrypto.so 00:18:17.552 ----------------------------------------------------- 00:18:17.552 00:18:17.810 21:49:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:17.810 21:49:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:17.810 21:49:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:17.810 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:17.810 21:49:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:17.811 21:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:17.811 { 
00:18:17.811 "subsystems": [ 00:18:17.811 { 00:18:17.811 "subsystem": "bdev", 00:18:17.811 "config": [ 00:18:17.811 { 00:18:17.811 "params": { 00:18:17.811 "io_mechanism": "io_uring_cmd", 00:18:17.811 "conserve_cpu": false, 00:18:17.811 "filename": "/dev/ng0n1", 00:18:17.811 "name": "xnvme_bdev" 00:18:17.811 }, 00:18:17.811 "method": "bdev_xnvme_create" 00:18:17.811 }, 00:18:17.811 { 00:18:17.811 "method": "bdev_wait_for_examine" 00:18:17.811 } 00:18:17.811 ] 00:18:17.811 } 00:18:17.811 ] 00:18:17.811 } 00:18:18.069 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:18.069 fio-3.35 00:18:18.069 Starting 1 thread 00:18:24.659 00:18:24.659 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74219: Tue Dec 10 21:49:31 2024 00:18:24.659 write: IOPS=28.0k, BW=109MiB/s (115MB/s)(547MiB/5002msec); 0 zone resets 00:18:24.659 slat (usec): min=3, max=125, avg= 6.81, stdev= 2.18 00:18:24.659 clat (usec): min=203, max=9036, avg=2020.91, stdev=249.55 00:18:24.659 lat (usec): min=210, max=9041, avg=2027.73, stdev=250.18 00:18:24.659 clat percentiles (usec): 00:18:24.659 | 1.00th=[ 1549], 5.00th=[ 1680], 10.00th=[ 1745], 20.00th=[ 1827], 00:18:24.659 | 30.00th=[ 1893], 40.00th=[ 1942], 50.00th=[ 2008], 60.00th=[ 2057], 00:18:24.659 | 70.00th=[ 2114], 80.00th=[ 2212], 90.00th=[ 2311], 95.00th=[ 2409], 00:18:24.659 | 99.00th=[ 2606], 99.50th=[ 2671], 99.90th=[ 2868], 99.95th=[ 3163], 00:18:24.659 | 99.99th=[ 8979] 00:18:24.659 bw ( KiB/s): min=107008, max=119296, per=100.00%, avg=112037.00, stdev=3892.12, samples=9 00:18:24.659 iops : min=26752, max=29824, avg=28009.22, stdev=973.05, samples=9 00:18:24.659 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.03% 00:18:24.659 lat (msec) : 2=48.93%, 4=50.97%, 10=0.04% 00:18:24.659 cpu : usr=34.99%, sys=63.91%, ctx=11, majf=0, minf=763 00:18:24.659 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:18:24.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.659 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:24.659 issued rwts: total=0,139951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.659 00:18:24.659 Run status group 0 (all jobs): 00:18:24.659 WRITE: bw=109MiB/s (115MB/s), 109MiB/s-109MiB/s (115MB/s-115MB/s), io=547MiB (573MB), run=5002-5002msec 00:18:25.276 ----------------------------------------------------- 00:18:25.276 Suppressions used: 00:18:25.276 count bytes template 00:18:25.276 1 11 /usr/src/fio/parse.c 00:18:25.276 1 8 libtcmalloc_minimal.so 00:18:25.276 1 904 libcrypto.so 00:18:25.276 ----------------------------------------------------- 00:18:25.276 00:18:25.276 ************************************ 00:18:25.276 END TEST xnvme_fio_plugin 00:18:25.276 ************************************ 00:18:25.276 00:18:25.276 real 0m14.895s 00:18:25.276 user 0m7.417s 00:18:25.276 sys 0m7.103s 00:18:25.276 21:49:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.276 21:49:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:25.276 21:49:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:25.276 21:49:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:25.276 21:49:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:25.276 21:49:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test 
xnvme_rpc xnvme_rpc 00:18:25.276 21:49:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.276 21:49:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.276 21:49:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:25.277 ************************************ 00:18:25.277 START TEST xnvme_rpc 00:18:25.277 ************************************ 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=74310 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 74310 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 74310 ']' 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.277 21:49:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.277 [2024-12-10 21:49:32.892854] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:18:25.277 [2024-12-10 21:49:32.893000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74310 ] 00:18:25.536 [2024-12-10 21:49:33.075252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.536 [2024-12-10 21:49:33.197908] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.474 xnvme_bdev 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.474 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:26.734 
21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 74310 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 74310 ']' 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 74310 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74310 00:18:26.734 killing process with pid 74310 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74310' 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 74310 00:18:26.734 21:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 74310 00:18:29.269 ************************************ 00:18:29.269 END TEST xnvme_rpc 00:18:29.269 ************************************ 00:18:29.269 00:18:29.269 real 0m3.843s 00:18:29.269 user 0m3.870s 00:18:29.269 sys 0m0.600s 00:18:29.269 21:49:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.269 21:49:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:29.269 21:49:36 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:29.269 21:49:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:29.269 21:49:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.269 21:49:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:29.269 ************************************ 00:18:29.269 START TEST xnvme_bdevperf 00:18:29.269 ************************************ 00:18:29.269 21:49:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:29.269 21:49:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:29.269 21:49:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:29.269 21:49:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:29.269 21:49:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:29.269 21:49:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:29.269 21:49:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:29.269 21:49:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:29.269 { 00:18:29.269 "subsystems": [ 00:18:29.269 { 00:18:29.269 "subsystem": "bdev", 00:18:29.269 "config": [ 00:18:29.269 { 00:18:29.269 "params": { 00:18:29.269 "io_mechanism": "io_uring_cmd", 00:18:29.269 "conserve_cpu": true, 00:18:29.269 "filename": "/dev/ng0n1", 00:18:29.269 "name": "xnvme_bdev" 00:18:29.269 }, 00:18:29.269 "method": "bdev_xnvme_create" 00:18:29.269 }, 00:18:29.269 { 00:18:29.269 "method": "bdev_wait_for_examine" 00:18:29.269 } 00:18:29.269 ] 00:18:29.269 } 00:18:29.269 ] 00:18:29.269 } 00:18:29.269 [2024-12-10 21:49:36.797744] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:18:29.269 [2024-12-10 21:49:36.798169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74384 ] 00:18:29.269 [2024-12-10 21:49:36.983147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.528 [2024-12-10 21:49:37.101632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.786 Running I/O for 5 seconds... 00:18:32.110 26688.00 IOPS, 104.25 MiB/s [2024-12-10T21:49:40.790Z] 26848.00 IOPS, 104.88 MiB/s [2024-12-10T21:49:41.726Z] 27392.00 IOPS, 107.00 MiB/s [2024-12-10T21:49:42.660Z] 27648.00 IOPS, 108.00 MiB/s 00:18:34.929 Latency(us) 00:18:34.929 [2024-12-10T21:49:42.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.929 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:34.930 xnvme_bdev : 5.00 27834.79 108.73 0.00 0.00 2292.22 1052.79 8264.38 00:18:34.930 [2024-12-10T21:49:42.661Z] =================================================================================================================== 00:18:34.930 [2024-12-10T21:49:42.661Z] Total : 27834.79 108.73 0.00 0.00 2292.22 1052.79 8264.38 00:18:35.866 21:49:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:35.866 21:49:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:35.866 21:49:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:35.866 21:49:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:35.866 21:49:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:36.125 { 00:18:36.125 "subsystems": [ 00:18:36.125 { 00:18:36.125 "subsystem": "bdev", 00:18:36.125 "config": [ 00:18:36.125 { 00:18:36.125 "params": { 00:18:36.125 "io_mechanism": "io_uring_cmd", 00:18:36.125 "conserve_cpu": true, 00:18:36.125 "filename": "/dev/ng0n1", 00:18:36.125 "name": "xnvme_bdev" 00:18:36.125 }, 00:18:36.125 "method": "bdev_xnvme_create" 00:18:36.125 }, 00:18:36.125 { 00:18:36.125 "method": "bdev_wait_for_examine" 00:18:36.125 } 00:18:36.125 ] 00:18:36.125 } 00:18:36.125 ] 00:18:36.125 } 00:18:36.125 [2024-12-10 21:49:43.673960] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:18:36.126 [2024-12-10 21:49:43.674129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74465 ] 00:18:36.385 [2024-12-10 21:49:43.857685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.385 [2024-12-10 21:49:43.984647] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.644 Running I/O for 5 seconds... 00:18:38.960 31424.00 IOPS, 122.75 MiB/s [2024-12-10T21:49:47.628Z] 29696.00 IOPS, 116.00 MiB/s [2024-12-10T21:49:48.564Z] 27754.67 IOPS, 108.42 MiB/s [2024-12-10T21:49:49.499Z] 27296.00 IOPS, 106.62 MiB/s 00:18:41.768 Latency(us) 00:18:41.768 [2024-12-10T21:49:49.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.768 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:41.769 xnvme_bdev : 5.00 27719.24 108.28 0.00 0.00 2301.45 848.81 8053.82 00:18:41.769 [2024-12-10T21:49:49.500Z] =================================================================================================================== 00:18:41.769 [2024-12-10T21:49:49.500Z] Total : 27719.24 108.28 0.00 0.00 2301.45 848.81 8053.82 00:18:43.144 21:49:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:43.144 21:49:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:43.144 21:49:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:43.144 21:49:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:43.144 21:49:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:43.144 { 00:18:43.144 "subsystems": [ 00:18:43.144 { 00:18:43.144 "subsystem": "bdev", 00:18:43.144 "config": [ 00:18:43.144 { 00:18:43.144 "params": { 00:18:43.144 "io_mechanism": "io_uring_cmd", 00:18:43.144 "conserve_cpu": true, 00:18:43.144 "filename": "/dev/ng0n1", 00:18:43.144 "name": "xnvme_bdev" 00:18:43.144 }, 00:18:43.144 "method": "bdev_xnvme_create" 00:18:43.144 }, 00:18:43.144 { 00:18:43.144 "method": "bdev_wait_for_examine" 00:18:43.144 } 00:18:43.144 ] 00:18:43.144 } 00:18:43.144 ] 00:18:43.144 } 00:18:43.144 [2024-12-10 21:49:50.577028] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:18:43.144 [2024-12-10 21:49:50.577177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74546 ] 00:18:43.144 [2024-12-10 21:49:50.759730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.403 [2024-12-10 21:49:50.891218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.662 Running I/O for 5 seconds... 
00:18:45.534 70080.00 IOPS, 273.75 MiB/s [2024-12-10T21:49:54.668Z] 70912.00 IOPS, 277.00 MiB/s [2024-12-10T21:49:55.606Z] 71189.33 IOPS, 278.08 MiB/s [2024-12-10T21:49:56.542Z] 71328.00 IOPS, 278.62 MiB/s [2024-12-10T21:49:56.542Z] 71398.40 IOPS, 278.90 MiB/s 00:18:48.811 Latency(us) 00:18:48.811 [2024-12-10T21:49:56.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.811 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:48.811 xnvme_bdev : 5.00 71386.56 278.85 0.00 0.00 893.80 598.77 2684.61 00:18:48.811 [2024-12-10T21:49:56.542Z] =================================================================================================================== 00:18:48.811 [2024-12-10T21:49:56.542Z] Total : 71386.56 278.85 0.00 0.00 893.80 598.77 2684.61 00:18:49.747 21:49:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:49.747 21:49:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:49.747 21:49:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:49.747 21:49:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:49.747 21:49:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:49.747 { 00:18:49.747 "subsystems": [ 00:18:49.747 { 00:18:49.747 "subsystem": "bdev", 00:18:49.747 "config": [ 00:18:49.747 { 00:18:49.747 "params": { 00:18:49.747 "io_mechanism": "io_uring_cmd", 00:18:49.747 "conserve_cpu": true, 00:18:49.747 "filename": "/dev/ng0n1", 00:18:49.747 "name": "xnvme_bdev" 00:18:49.747 }, 00:18:49.747 "method": "bdev_xnvme_create" 00:18:49.747 }, 00:18:49.747 { 00:18:49.747 "method": "bdev_wait_for_examine" 00:18:49.747 } 00:18:49.747 ] 00:18:49.747 } 00:18:49.747 ] 00:18:49.747 } 00:18:49.747 [2024-12-10 21:49:57.469662] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:18:49.747 [2024-12-10 21:49:57.470684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74620 ] 00:18:50.005 [2024-12-10 21:49:57.669690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.263 [2024-12-10 21:49:57.796192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.521 Running I/O for 5 seconds... 
00:18:52.832 55092.00 IOPS, 215.20 MiB/s [2024-12-10T21:50:01.495Z] 55701.50 IOPS, 217.58 MiB/s [2024-12-10T21:50:02.432Z] 55153.00 IOPS, 215.44 MiB/s [2024-12-10T21:50:03.367Z] 54226.50 IOPS, 211.82 MiB/s [2024-12-10T21:50:03.367Z] 53440.00 IOPS, 208.75 MiB/s 00:18:55.636 Latency(us) 00:18:55.636 [2024-12-10T21:50:03.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.636 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:55.636 xnvme_bdev : 5.01 53377.46 208.51 0.00 0.00 1193.67 93.76 24214.10 00:18:55.636 [2024-12-10T21:50:03.367Z] =================================================================================================================== 00:18:55.636 [2024-12-10T21:50:03.367Z] Total : 53377.46 208.51 0.00 0.00 1193.67 93.76 24214.10 00:18:56.573 00:18:56.573 real 0m27.598s 00:18:56.573 user 0m16.420s 00:18:56.573 sys 0m8.659s 00:18:56.573 21:50:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.573 21:50:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.573 ************************************ 00:18:56.573 END TEST xnvme_bdevperf 00:18:56.573 ************************************ 00:18:56.832 21:50:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:56.832 21:50:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:56.832 21:50:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.832 21:50:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:56.832 ************************************ 00:18:56.832 START TEST xnvme_fio_plugin 00:18:56.832 ************************************ 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.832 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.833 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:56.833 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:56.833 { 00:18:56.833 "subsystems": [ 00:18:56.833 { 00:18:56.833 "subsystem": "bdev", 00:18:56.833 "config": [ 00:18:56.833 { 00:18:56.833 "params": { 00:18:56.833 "io_mechanism": "io_uring_cmd", 00:18:56.833 "conserve_cpu": true, 00:18:56.833 "filename": "/dev/ng0n1", 00:18:56.833 "name": "xnvme_bdev" 00:18:56.833 }, 00:18:56.833 "method": "bdev_xnvme_create" 00:18:56.833 }, 00:18:56.833 { 00:18:56.833 "method": "bdev_wait_for_examine" 00:18:56.833 } 00:18:56.833 ] 00:18:56.833 } 00:18:56.833 ] 00:18:56.833 } 00:18:56.833 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:56.833 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:56.833 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:56.833 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:56.833 21:50:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:57.091 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:57.091 fio-3.35 00:18:57.091 Starting 1 thread 00:19:03.718 00:19:03.718 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74745: Tue Dec 10 21:50:10 2024 00:19:03.718 read: IOPS=28.6k, BW=112MiB/s (117MB/s)(559MiB/5002msec) 00:19:03.718 slat (nsec): min=3534, max=85511, avg=6491.58, stdev=2466.53 00:19:03.718 clat (usec): min=1282, max=3110, avg=1979.23, stdev=301.43 00:19:03.718 lat (usec): min=1287, max=3150, avg=1985.73, stdev=302.63 00:19:03.718 clat percentiles (usec): 00:19:03.718 | 1.00th=[ 1467], 5.00th=[ 1582], 10.00th=[ 1631], 20.00th=[ 1713], 00:19:03.718 | 30.00th=[ 1778], 40.00th=[ 1860], 50.00th=[ 1926], 60.00th=[ 1991], 00:19:03.718 | 70.00th=[ 2114], 80.00th=[ 2245], 90.00th=[ 2442], 95.00th=[ 2573], 00:19:03.718 | 99.00th=[ 2704], 99.50th=[ 2769], 99.90th=[ 2835], 99.95th=[ 2868], 00:19:03.718 | 99.99th=[ 2999] 00:19:03.718 bw ( KiB/s): min=94208, max=130048, per=99.03%, avg=113379.56, stdev=12447.26, samples=9 00:19:03.718 iops : min=23552, max=32512, avg=28344.89, stdev=3111.82, samples=9 00:19:03.718 lat (msec) : 2=60.44%, 4=39.56% 00:19:03.718 cpu : usr=49.83%, sys=47.31%, ctx=15, majf=0, minf=762 00:19:03.718 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:03.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.718 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:03.718 issued rwts: 
total=143166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.718 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.718 00:19:03.718 Run status group 0 (all jobs): 00:19:03.718 READ: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=559MiB (586MB), run=5002-5002msec 00:19:04.287 ----------------------------------------------------- 00:19:04.287 Suppressions used: 00:19:04.287 count bytes template 00:19:04.287 1 11 /usr/src/fio/parse.c 00:19:04.287 1 8 libtcmalloc_minimal.so 00:19:04.287 1 904 libcrypto.so 00:19:04.287 ----------------------------------------------------- 00:19:04.287 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:04.287 21:50:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:19:04.287 { 00:19:04.287 "subsystems": [ 00:19:04.287 { 00:19:04.287 "subsystem": "bdev", 00:19:04.287 "config": [ 00:19:04.287 { 00:19:04.287 "params": { 00:19:04.287 "io_mechanism": "io_uring_cmd", 00:19:04.287 "conserve_cpu": true, 00:19:04.287 "filename": "/dev/ng0n1", 00:19:04.287 "name": "xnvme_bdev" 00:19:04.287 }, 00:19:04.287 "method": "bdev_xnvme_create" 00:19:04.287 }, 00:19:04.287 { 00:19:04.287 "method": "bdev_wait_for_examine" 00:19:04.287 } 00:19:04.287 ] 00:19:04.287 } 00:19:04.287 ] 00:19:04.287 } 00:19:04.546 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:04.546 fio-3.35 00:19:04.546 Starting 1 thread 00:19:11.113 00:19:11.113 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74841: Tue Dec 10 21:50:17 2024 00:19:11.113 write: IOPS=29.7k, BW=116MiB/s (122MB/s)(580MiB/5001msec); 0 zone resets 00:19:11.113 slat (usec): min=2, max=1523, avg= 6.37, stdev= 4.67 00:19:11.113 clat (usec): min=900, max=4576, avg=1902.28, stdev=325.31 00:19:11.113 lat (usec): min=903, max=4590, avg=1908.65, stdev=326.60 00:19:11.113 clat percentiles (usec): 00:19:11.113 | 1.00th=[ 1123], 5.00th=[ 1369], 10.00th=[ 1532], 20.00th=[ 1663], 00:19:11.113 | 30.00th=[ 1729], 40.00th=[ 1811], 50.00th=[ 1876], 60.00th=[ 1958], 00:19:11.113 | 70.00th=[ 2040], 80.00th=[ 2147], 90.00th=[ 2343], 95.00th=[ 2474], 00:19:11.113 | 99.00th=[ 2704], 99.50th=[ 2769], 99.90th=[ 2966], 99.95th=[ 3130], 00:19:11.113 | 99.99th=[ 4424] 00:19:11.113 bw ( KiB/s): min=104960, max=141816, per=100.00%, avg=119669.00, stdev=12238.95, samples=9 00:19:11.113 iops : min=26240, max=35454, avg=29917.22, stdev=3059.76, samples=9 00:19:11.113 lat (usec) : 1000=0.19% 00:19:11.113 lat (msec) : 2=65.16%, 4=34.63%, 10=0.02% 00:19:11.113 cpu : usr=51.10%, sys=45.60%, ctx=13, majf=0, minf=763 00:19:11.113 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:11.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.113 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:11.113 issued rwts: total=0,148415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.113 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:11.113 00:19:11.113 Run status group 0 (all jobs): 00:19:11.113 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=580MiB (608MB), run=5001-5001msec 00:19:11.679 ----------------------------------------------------- 00:19:11.679 Suppressions used: 00:19:11.679 count bytes template 00:19:11.679 1 11 /usr/src/fio/parse.c 00:19:11.680 1 8 libtcmalloc_minimal.so 00:19:11.680 1 904 libcrypto.so 00:19:11.680 ----------------------------------------------------- 00:19:11.680 00:19:11.680 00:19:11.680 real 0m14.770s 00:19:11.680 user 0m8.783s 00:19:11.680 sys 0m5.427s 00:19:11.680 21:50:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.680 21:50:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:11.680 ************************************ 00:19:11.680 END TEST xnvme_fio_plugin 00:19:11.680 ************************************ 00:19:11.680 Process with pid 74310 is not found 00:19:11.680 21:50:19 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 74310 00:19:11.680 21:50:19 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74310 ']' 00:19:11.680 21:50:19 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 74310 00:19:11.680 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74310) - No such process 00:19:11.680 21:50:19 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 74310 is not found' 00:19:11.680 00:19:11.680 real 3m53.117s 00:19:11.680 user 2m4.865s 00:19:11.680 sys 1m30.754s 00:19:11.680 21:50:19 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.680 ************************************ 00:19:11.680 END TEST nvme_xnvme 00:19:11.680 ************************************ 00:19:11.680 21:50:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.680 21:50:19 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:11.680 21:50:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:11.680 21:50:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.680 21:50:19 -- common/autotest_common.sh@10 -- # set +x 00:19:11.680 ************************************ 00:19:11.680 START TEST blockdev_xnvme 00:19:11.680 ************************************ 00:19:11.680 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:11.680 * Looking for test storage... 00:19:11.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:11.680 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:11.680 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:11.680 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.939 21:50:19 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:11.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.939 --rc genhtml_branch_coverage=1 00:19:11.939 --rc genhtml_function_coverage=1 00:19:11.939 --rc genhtml_legend=1 00:19:11.939 --rc geninfo_all_blocks=1 00:19:11.939 --rc geninfo_unexecuted_blocks=1 00:19:11.939 00:19:11.939 ' 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:11.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.939 --rc genhtml_branch_coverage=1 00:19:11.939 --rc genhtml_function_coverage=1 00:19:11.939 --rc genhtml_legend=1 00:19:11.939 --rc geninfo_all_blocks=1 00:19:11.939 --rc geninfo_unexecuted_blocks=1 00:19:11.939 00:19:11.939 ' 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:11.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.939 --rc genhtml_branch_coverage=1 00:19:11.939 --rc genhtml_function_coverage=1 00:19:11.939 --rc genhtml_legend=1 00:19:11.939 --rc geninfo_all_blocks=1 00:19:11.939 --rc geninfo_unexecuted_blocks=1 00:19:11.939 00:19:11.939 ' 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:11.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.939 --rc genhtml_branch_coverage=1 00:19:11.939 --rc genhtml_function_coverage=1 00:19:11.939 --rc genhtml_legend=1 00:19:11.939 --rc geninfo_all_blocks=1 00:19:11.939 --rc geninfo_unexecuted_blocks=1 00:19:11.939 00:19:11.939 ' 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74980 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:11.939 21:50:19 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74980 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74980 ']' 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.939 21:50:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.939 [2024-12-10 21:50:19.628401] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:11.940 [2024-12-10 21:50:19.628714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74980 ] 00:19:12.199 [2024-12-10 21:50:19.812127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.457 [2024-12-10 21:50:19.936437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.394 21:50:20 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.394 21:50:20 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:13.394 21:50:20 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:13.394 21:50:20 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:13.394 21:50:20 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:13.394 21:50:20 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:13.394 21:50:20 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:13.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:14.530 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:14.530 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:14.530 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:14.530 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:14.789 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0c0n1 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0c0n1 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:14.789 
21:50:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:14.789 21:50:22 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:14.789 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:14.789 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:14.789 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:14.789 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:14.789 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:14.789 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:14.789 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n2 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n3 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring -c' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:14.790 nvme0n1 00:19:14.790 nvme1n1 00:19:14.790 nvme1n2 00:19:14.790 nvme1n3 00:19:14.790 nvme2n1 00:19:14.790 nvme3n1 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 
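
The printf above batches one bdev_xnvme_create line per raw namespace into rpc_cmd, and the bdev names echoed back (nvme0n1 through nvme3n1) confirm all six were created. Issued one at a time, the equivalent calls reduce to the sketch below; io_uring is the io_mechanism chosen at the top of the test, and the trailing -c mirrors the flag appended in the trace (read here as the conserve-CPU option of bdev_xnvme_create — treat that reading as an assumption):

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue
    # args: backing device, bdev name, I/O mechanism, plus -c as in the trace
    "$RPC_PY" bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
done
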
21:50:22 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.790 21:50:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.790 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:15.049 21:50:22 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.049 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:15.049 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:15.050 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d66bc3b1-25f9-4452-ad1f-fcc18ae3349a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d66bc3b1-25f9-4452-ad1f-fcc18ae3349a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "eac453c8-13c8-417a-ac66-58778f107da1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eac453c8-13c8-417a-ac66-58778f107da1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "7ceb1fc2-f5d2-4e2c-82aa-f51b2a0738f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7ceb1fc2-f5d2-4e2c-82aa-f51b2a0738f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "42e624a1-4d06-440b-abf9-cb1b15a56bfb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "42e624a1-4d06-440b-abf9-cb1b15a56bfb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c34553c9-04b6-4f70-ba18-e1cdb8bd608e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c34553c9-04b6-4f70-ba18-e1cdb8bd608e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0f930404-eb09-46f1-8e7a-387c776b563e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0f930404-eb09-46f1-8e7a-387c776b563e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:15.050 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:15.050 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:15.050 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:15.050 21:50:22 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74980 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74980 ']' 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74980 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 74980 00:19:15.050 killing process with pid 74980 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74980' 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74980 00:19:15.050 21:50:22 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74980 00:19:17.585 21:50:24 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:17.585 21:50:24 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:17.585 21:50:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:17.585 21:50:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.585 21:50:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.585 ************************************ 00:19:17.585 START TEST bdev_hello_world 00:19:17.585 ************************************ 00:19:17.585 21:50:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:17.585 [2024-12-10 21:50:25.086656] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:17.585 [2024-12-10 21:50:25.086784] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75271 ] 00:19:17.585 [2024-12-10 21:50:25.270224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.844 [2024-12-10 21:50:25.403751] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.412 [2024-12-10 21:50:25.848126] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:18.412 [2024-12-10 21:50:25.848180] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:18.412 [2024-12-10 21:50:25.848197] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:18.412 [2024-12-10 21:50:25.850428] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:18.412 [2024-12-10 21:50:25.850837] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:18.412 [2024-12-10 21:50:25.850862] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:18.412 [2024-12-10 21:50:25.851177] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
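
hello_bdev opens the bdev named with -b, writes its "Hello World!" string, reads it back, and stops the app — exactly the NOTICE sequence in the trace above. The whole example run is a single invocation against the shared bdev config:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b nvme0n1   # write, read back, and verify "Hello World!" on the first xnvme bdev
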
00:19:18.412 00:19:18.412 [2024-12-10 21:50:25.851199] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:19.349 00:19:19.349 real 0m1.980s 00:19:19.349 user 0m1.597s 00:19:19.349 sys 0m0.265s 00:19:19.349 21:50:26 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.349 21:50:26 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:19.349 ************************************ 00:19:19.349 END TEST bdev_hello_world 00:19:19.349 ************************************ 00:19:19.349 21:50:27 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:19.349 21:50:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:19.349 21:50:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.349 21:50:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.349 ************************************ 00:19:19.349 START TEST bdev_bounds 00:19:19.349 ************************************ 00:19:19.349 Process bdevio pid: 75316 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75316 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75316' 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75316 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 75316 ']' 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.349 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:19.608 [2024-12-10 21:50:27.147311] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:19.608 [2024-12-10 21:50:27.147683] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75316 ] 00:19:19.608 [2024-12-10 21:50:27.328202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:19.866 [2024-12-10 21:50:27.460648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.866 [2024-12-10 21:50:27.460801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.866 [2024-12-10 21:50:27.460832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.435 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.435 21:50:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:20.435 21:50:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:20.435 I/O targets: 00:19:20.435 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:20.435 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.435 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.435 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.435 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:20.435 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:20.435 00:19:20.435 00:19:20.435 CUnit - A unit testing framework for C - Version 2.1-3 00:19:20.435 http://cunit.sourceforge.net/ 00:19:20.435 00:19:20.435 00:19:20.435 Suite: bdevio tests on: nvme3n1 00:19:20.435 Test: blockdev write read block ...passed 00:19:20.435 Test: blockdev write zeroes read block ...passed 00:19:20.435 Test: blockdev write zeroes read no split ...passed 00:19:20.435 Test: blockdev write zeroes read split ...passed 00:19:20.435 Test: blockdev write zeroes read split partial ...passed 00:19:20.435 Test: blockdev reset ...passed 00:19:20.435 Test: blockdev write read 8 blocks ...passed 00:19:20.435 Test: blockdev write read size > 128k ...passed 00:19:20.435 Test: blockdev write read invalid size ...passed 00:19:20.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.435 Test: blockdev write read max offset ...passed 00:19:20.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.435 Test: blockdev writev readv 8 blocks ...passed 00:19:20.435 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.435 Test: blockdev writev readv block ...passed 00:19:20.435 Test: blockdev writev readv size > 128k ...passed 00:19:20.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.435 Test: blockdev comparev and writev ...passed 00:19:20.435 Test: blockdev nvme passthru rw ...passed 00:19:20.435 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.435 Test: blockdev nvme admin passthru ...passed 00:19:20.435 Test: blockdev copy ...passed 00:19:20.435 Suite: bdevio tests on: nvme2n1 00:19:20.435 Test: blockdev write read block ...passed 00:19:20.435 Test: blockdev write zeroes read block ...passed 00:19:20.693 Test: blockdev write zeroes read no split ...passed 00:19:20.693 Test: blockdev write zeroes read split ...passed 00:19:20.693 Test: blockdev write zeroes read split partial ...passed 00:19:20.693 Test: blockdev reset ...passed 
00:19:20.693 Test: blockdev write read 8 blocks ...passed 00:19:20.693 Test: blockdev write read size > 128k ...passed 00:19:20.693 Test: blockdev write read invalid size ...passed 00:19:20.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.693 Test: blockdev write read max offset ...passed 00:19:20.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.693 Test: blockdev writev readv 8 blocks ...passed 00:19:20.693 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.693 Test: blockdev writev readv block ...passed 00:19:20.693 Test: blockdev writev readv size > 128k ...passed 00:19:20.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.693 Test: blockdev comparev and writev ...passed 00:19:20.693 Test: blockdev nvme passthru rw ...passed 00:19:20.693 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.693 Test: blockdev nvme admin passthru ...passed 00:19:20.693 Test: blockdev copy ...passed 00:19:20.693 Suite: bdevio tests on: nvme1n3 00:19:20.693 Test: blockdev write read block ...passed 00:19:20.693 Test: blockdev write zeroes read block ...passed 00:19:20.693 Test: blockdev write zeroes read no split ...passed 00:19:20.693 Test: blockdev write zeroes read split ...passed 00:19:20.693 Test: blockdev write zeroes read split partial ...passed 00:19:20.693 Test: blockdev reset ...passed 00:19:20.693 Test: blockdev write read 8 blocks ...passed 00:19:20.693 Test: blockdev write read size > 128k ...passed 00:19:20.693 Test: blockdev write read invalid size ...passed 00:19:20.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.693 Test: blockdev write read max offset ...passed 00:19:20.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.693 Test: blockdev writev readv 8 blocks ...passed 00:19:20.693 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.693 Test: blockdev writev readv block ...passed 00:19:20.693 Test: blockdev writev readv size > 128k ...passed 00:19:20.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.693 Test: blockdev comparev and writev ...passed 00:19:20.693 Test: blockdev nvme passthru rw ...passed 00:19:20.693 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.693 Test: blockdev nvme admin passthru ...passed 00:19:20.693 Test: blockdev copy ...passed 00:19:20.693 Suite: bdevio tests on: nvme1n2 00:19:20.693 Test: blockdev write read block ...passed 00:19:20.693 Test: blockdev write zeroes read block ...passed 00:19:20.693 Test: blockdev write zeroes read no split ...passed 00:19:20.693 Test: blockdev write zeroes read split ...passed 00:19:20.693 Test: blockdev write zeroes read split partial ...passed 00:19:20.693 Test: blockdev reset ...passed 00:19:20.693 Test: blockdev write read 8 blocks ...passed 00:19:20.693 Test: blockdev write read size > 128k ...passed 00:19:20.693 Test: blockdev write read invalid size ...passed 00:19:20.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.693 Test: blockdev write read max offset ...passed 00:19:20.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.693 Test: blockdev writev readv 8 blocks 
...passed 00:19:20.693 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.693 Test: blockdev writev readv block ...passed 00:19:20.693 Test: blockdev writev readv size > 128k ...passed 00:19:20.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.694 Test: blockdev comparev and writev ...passed 00:19:20.694 Test: blockdev nvme passthru rw ...passed 00:19:20.694 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.694 Test: blockdev nvme admin passthru ...passed 00:19:20.694 Test: blockdev copy ...passed 00:19:20.694 Suite: bdevio tests on: nvme1n1 00:19:20.694 Test: blockdev write read block ...passed 00:19:20.694 Test: blockdev write zeroes read block ...passed 00:19:20.694 Test: blockdev write zeroes read no split ...passed 00:19:20.953 Test: blockdev write zeroes read split ...passed 00:19:20.953 Test: blockdev write zeroes read split partial ...passed 00:19:20.953 Test: blockdev reset ...passed 00:19:20.953 Test: blockdev write read 8 blocks ...passed 00:19:20.953 Test: blockdev write read size > 128k ...passed 00:19:20.953 Test: blockdev write read invalid size ...passed 00:19:20.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.953 Test: blockdev write read max offset ...passed 00:19:20.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.953 Test: blockdev writev readv 8 blocks ...passed 00:19:20.953 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.953 Test: blockdev writev readv block ...passed 00:19:20.953 Test: blockdev writev readv size > 128k ...passed 00:19:20.953 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.953 Test: blockdev comparev and writev ...passed 00:19:20.953 Test: blockdev nvme passthru rw ...passed 00:19:20.953 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.953 Test: blockdev nvme admin passthru ...passed 00:19:20.953 Test: blockdev copy ...passed 00:19:20.953 Suite: bdevio tests on: nvme0n1 00:19:20.953 Test: blockdev write read block ...passed 00:19:20.953 Test: blockdev write zeroes read block ...passed 00:19:20.953 Test: blockdev write zeroes read no split ...passed 00:19:20.953 Test: blockdev write zeroes read split ...passed 00:19:20.953 Test: blockdev write zeroes read split partial ...passed 00:19:20.953 Test: blockdev reset ...passed 00:19:20.953 Test: blockdev write read 8 blocks ...passed 00:19:20.953 Test: blockdev write read size > 128k ...passed 00:19:20.953 Test: blockdev write read invalid size ...passed 00:19:20.953 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.953 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.953 Test: blockdev write read max offset ...passed 00:19:20.953 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.953 Test: blockdev writev readv 8 blocks ...passed 00:19:20.953 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.953 Test: blockdev writev readv block ...passed 00:19:20.953 Test: blockdev writev readv size > 128k ...passed 00:19:20.953 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.953 Test: blockdev comparev and writev ...passed 00:19:20.953 Test: blockdev nvme passthru rw ...passed 00:19:20.953 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.953 Test: blockdev nvme admin passthru ...passed 00:19:20.953 Test: blockdev copy ...passed 
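
The per-bdev suites above come from pairing the bdevio app with its tests.py driver: bdevio loads the same bdev.json, waits for RPC commands (-w) with no reserved memory (-s 0), and perform_tests fires every registered CUnit suite. The pairing reduces to roughly this (the real script also waits for bdevio's RPC socket before calling perform_tests):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
bdevio_pid=$!
# trigger every "bdevio tests on: ..." suite listed above
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"
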
00:19:20.953 00:19:20.953 Run Summary: Type Total Ran Passed Failed Inactive 00:19:20.953 suites 6 6 n/a 0 0 00:19:20.953 tests 138 138 138 0 0 00:19:20.953 asserts 780 780 780 0 n/a 00:19:20.953 00:19:20.953 Elapsed time = 1.304 seconds 00:19:20.953 0 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75316 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 75316 ']' 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 75316 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75316 00:19:20.953 killing process with pid 75316 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75316' 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 75316 00:19:20.953 21:50:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 75316 00:19:22.331 ************************************ 00:19:22.331 END TEST bdev_bounds 00:19:22.331 ************************************ 00:19:22.331 21:50:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:22.331 00:19:22.331 real 0m2.737s 00:19:22.331 user 0m6.684s 00:19:22.331 sys 0m0.457s 00:19:22.331 21:50:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.331 21:50:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:22.331 21:50:29 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:19:22.331 21:50:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:22.331 21:50:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.331 21:50:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.331 ************************************ 00:19:22.331 START TEST bdev_nbd 00:19:22.331 ************************************ 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
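
bdev_nbd, started above, first checks /sys/module/nbd and then exports the six bdevs through the kernel NBD driver over the dedicated /var/tmp/spdk-nbd.sock RPC server. For a single device, the start/verify/stop cycle seen in the following trace reduces to the sketch below (the real waitfornbd helper retries the grep in a loop rather than checking once):

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

"$RPC_PY" -s "$SOCK" nbd_start_disk nvme0n1 /dev/nbd0   # export the bdev as an NBD device
grep -q -w nbd0 /proc/partitions                        # wait for the kernel to register it
# one direct-I/O read proves the export works end to end
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
"$RPC_PY" -s "$SOCK" nbd_stop_disk /dev/nbd0
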
00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75380 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75380 /var/tmp/spdk-nbd.sock 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 75380 ']' 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.331 21:50:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:22.331 [2024-12-10 21:50:29.969418] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:19:22.332 [2024-12-10 21:50:29.969552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.591 [2024-12-10 21:50:30.146429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.591 [2024-12-10 21:50:30.277105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.159 21:50:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.418 
1+0 records in 00:19:23.418 1+0 records out 00:19:23.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596435 s, 6.9 MB/s 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.418 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.677 1+0 records in 00:19:23.677 1+0 records out 00:19:23.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791016 s, 5.2 MB/s 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.677 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:23.936 21:50:31 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.936 1+0 records in 00:19:23.936 1+0 records out 00:19:23.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755261 s, 5.4 MB/s 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.936 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.195 1+0 records in 00:19:24.195 1+0 records out 00:19:24.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780027 s, 5.3 MB/s 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:24.195 21:50:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.454 1+0 records in 00:19:24.454 1+0 records out 00:19:24.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081752 s, 5.0 MB/s 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:24.454 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:19:24.713 21:50:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.713 1+0 records in 00:19:24.713 1+0 records out 00:19:24.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058488 s, 7.0 MB/s 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:24.713 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd0", 00:19:24.972 "bdev_name": "nvme0n1" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd1", 00:19:24.972 "bdev_name": "nvme1n1" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd2", 00:19:24.972 "bdev_name": "nvme1n2" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd3", 00:19:24.972 "bdev_name": "nvme1n3" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd4", 00:19:24.972 "bdev_name": "nvme2n1" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd5", 00:19:24.972 "bdev_name": "nvme3n1" 00:19:24.972 } 00:19:24.972 ]' 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd0", 00:19:24.972 "bdev_name": "nvme0n1" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd1", 00:19:24.972 "bdev_name": "nvme1n1" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd2", 00:19:24.972 "bdev_name": "nvme1n2" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd3", 00:19:24.972 "bdev_name": "nvme1n3" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": "/dev/nbd4", 00:19:24.972 "bdev_name": "nvme2n1" 00:19:24.972 }, 00:19:24.972 { 00:19:24.972 "nbd_device": 
"/dev/nbd5", 00:19:24.972 "bdev_name": "nvme3n1" 00:19:24.972 } 00:19:24.972 ]' 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:24.972 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.231 21:50:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.490 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.748 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.006 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.265 21:50:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.523 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:26.782 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:27.041 /dev/nbd0 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.041 1+0 records in 00:19:27.041 1+0 records out 00:19:27.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633436 s, 6.5 MB/s 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.041 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:19:27.300 /dev/nbd1 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.300 1+0 records in 00:19:27.300 1+0 records out 00:19:27.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006039 s, 6.8 MB/s 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.300 21:50:34 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.300 21:50:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:19:27.565 /dev/nbd10 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.565 1+0 records in 00:19:27.565 1+0 records out 00:19:27.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811514 s, 5.0 MB/s 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.565 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:19:27.824 /dev/nbd11 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.824 21:50:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.824 1+0 records in 00:19:27.824 1+0 records out 00:19:27.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698201 s, 5.9 MB/s 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.824 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:19:28.083 /dev/nbd12 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.083 1+0 records in 00:19:28.083 1+0 records out 00:19:28.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678491 s, 6.0 MB/s 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:28.083 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:28.341 /dev/nbd13 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.341 1+0 records in 00:19:28.341 1+0 records out 00:19:28.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000914924 s, 4.5 MB/s 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.341 21:50:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:28.599 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd0", 00:19:28.599 "bdev_name": "nvme0n1" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd1", 00:19:28.599 "bdev_name": "nvme1n1" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd10", 00:19:28.599 "bdev_name": "nvme1n2" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd11", 00:19:28.599 "bdev_name": "nvme1n3" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd12", 00:19:28.599 "bdev_name": "nvme2n1" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd13", 00:19:28.599 "bdev_name": "nvme3n1" 00:19:28.599 } 00:19:28.599 ]' 00:19:28.599 21:50:36 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:28.599 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd0", 00:19:28.599 "bdev_name": "nvme0n1" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd1", 00:19:28.599 "bdev_name": "nvme1n1" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd10", 00:19:28.599 "bdev_name": "nvme1n2" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd11", 00:19:28.599 "bdev_name": "nvme1n3" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd12", 00:19:28.599 "bdev_name": "nvme2n1" 00:19:28.599 }, 00:19:28.599 { 00:19:28.599 "nbd_device": "/dev/nbd13", 00:19:28.599 "bdev_name": "nvme3n1" 00:19:28.599 } 00:19:28.599 ]' 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:28.600 /dev/nbd1 00:19:28.600 /dev/nbd10 00:19:28.600 /dev/nbd11 00:19:28.600 /dev/nbd12 00:19:28.600 /dev/nbd13' 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:28.600 /dev/nbd1 00:19:28.600 /dev/nbd10 00:19:28.600 /dev/nbd11 00:19:28.600 /dev/nbd12 00:19:28.600 /dev/nbd13' 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:28.600 256+0 records in 00:19:28.600 256+0 records out 00:19:28.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114871 s, 91.3 MB/s 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:28.600 256+0 records in 00:19:28.600 256+0 records out 00:19:28.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123353 s, 8.5 MB/s 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.600 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:28.858 256+0 records in 00:19:28.858 256+0 records out 00:19:28.858 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.128625 s, 8.2 MB/s 00:19:28.858 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.858 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:28.858 256+0 records in 00:19:28.858 256+0 records out 00:19:28.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12696 s, 8.3 MB/s 00:19:28.858 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.858 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:29.116 256+0 records in 00:19:29.116 256+0 records out 00:19:29.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125544 s, 8.4 MB/s 00:19:29.116 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:29.116 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:29.116 256+0 records in 00:19:29.116 256+0 records out 00:19:29.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157693 s, 6.6 MB/s 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:29.374 256+0 records in 00:19:29.374 256+0 records out 00:19:29.374 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128356 s, 8.2 MB/s 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.374 21:50:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.374 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:29.632 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.632 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.633 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.633 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.633 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.633 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.633 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.633 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.633 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.633 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.891 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.149 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.408 21:50:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:30.667 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.925 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:19:30.925 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:30.925 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.925 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.925 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:30.926 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:31.184 malloc_lvol_verify 00:19:31.184 21:50:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:31.442 020bfe70-d503-4e44-981a-8e6b5d0b4df2 00:19:31.442 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:31.700 6c717491-0987-43f5-a913-13f6cf25a36e 00:19:31.700 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:31.959 /dev/nbd0 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:31.959 mke2fs 1.47.0 (5-Feb-2023) 00:19:31.959 
Discarding device blocks: 0/4096 done 00:19:31.959 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:31.959 00:19:31.959 Allocating group tables: 0/1 done 00:19:31.959 Writing inode tables: 0/1 done 00:19:31.959 Creating journal (1024 blocks): done 00:19:31.959 Writing superblocks and filesystem accounting information: 0/1 done 00:19:31.959 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.959 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75380 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 75380 ']' 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 75380 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75380 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.217 killing process with pid 75380 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75380' 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 75380 00:19:32.217 21:50:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 75380 00:19:33.593 21:50:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:33.593 00:19:33.593 real 0m11.178s 00:19:33.593 user 0m14.290s 00:19:33.593 sys 0m4.795s 00:19:33.593 21:50:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.593 21:50:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:33.593 ************************************ 00:19:33.593 END TEST bdev_nbd 00:19:33.593 
************************************ 00:19:33.593 21:50:41 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:33.593 21:50:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:19:33.593 21:50:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:19:33.593 21:50:41 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:33.593 21:50:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.593 21:50:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.593 21:50:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:33.593 ************************************ 00:19:33.593 START TEST bdev_fio 00:19:33.593 ************************************ 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:33.593 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:33.593 21:50:41 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n2]' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n2 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n3]' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n3 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:33.593 ************************************ 00:19:33.593 START TEST bdev_fio_rw_verify 00:19:33.593 ************************************ 00:19:33.593 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:33.594 21:50:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:33.852 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.852 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.852 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.852 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.852 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.852 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:33.852 fio-3.35 00:19:33.852 Starting 6 threads 00:19:46.084 00:19:46.084 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75785: Tue Dec 10 21:50:52 2024 00:19:46.084 read: IOPS=32.5k, BW=127MiB/s (133MB/s)(1269MiB/10001msec) 00:19:46.084 slat (usec): min=2, max=614, avg= 6.23, stdev= 3.17 00:19:46.084 clat (usec): min=90, max=5169, avg=600.44, 
stdev=178.10 00:19:46.084 lat (usec): min=97, max=5194, avg=606.67, stdev=178.76 00:19:46.084 clat percentiles (usec): 00:19:46.084 | 50.000th=[ 635], 99.000th=[ 1057], 99.900th=[ 1647], 99.990th=[ 2999], 00:19:46.084 | 99.999th=[ 5145] 00:19:46.084 write: IOPS=32.8k, BW=128MiB/s (134MB/s)(1280MiB/10001msec); 0 zone resets 00:19:46.084 slat (usec): min=11, max=1260, avg=19.22, stdev=19.57 00:19:46.084 clat (usec): min=80, max=3315, avg=673.04, stdev=184.30 00:19:46.084 lat (usec): min=100, max=3331, avg=692.26, stdev=185.90 00:19:46.084 clat percentiles (usec): 00:19:46.084 | 50.000th=[ 685], 99.000th=[ 1303], 99.900th=[ 1926], 99.990th=[ 2933], 00:19:46.084 | 99.999th=[ 3294] 00:19:46.084 bw ( KiB/s): min=107856, max=152016, per=100.00%, avg=131497.21, stdev=2209.32, samples=114 00:19:46.084 iops : min=26964, max=38004, avg=32874.21, stdev=552.33, samples=114 00:19:46.084 lat (usec) : 100=0.01%, 250=3.11%, 500=14.79%, 750=64.71%, 1000=15.04% 00:19:46.084 lat (msec) : 2=2.29%, 4=0.06%, 10=0.01% 00:19:46.084 cpu : usr=66.53%, sys=22.88%, ctx=7431, majf=0, minf=27043 00:19:46.084 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.084 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.084 issued rwts: total=324904,327756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.084 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:46.084 00:19:46.084 Run status group 0 (all jobs): 00:19:46.084 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=1269MiB (1331MB), run=10001-10001msec 00:19:46.084 WRITE: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=1280MiB (1342MB), run=10001-10001msec 00:19:46.084 ----------------------------------------------------- 00:19:46.084 Suppressions used: 00:19:46.084 count bytes template 00:19:46.084 6 48 /usr/src/fio/parse.c 00:19:46.084 2631 252576 /usr/src/fio/iolog.c 00:19:46.084 1 8 libtcmalloc_minimal.so 00:19:46.084 1 904 libcrypto.so 00:19:46.084 ----------------------------------------------------- 00:19:46.084 00:19:46.084 00:19:46.084 real 0m12.532s 00:19:46.084 user 0m41.907s 00:19:46.084 sys 0m14.187s 00:19:46.084 21:50:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.084 ************************************ 00:19:46.084 END TEST bdev_fio_rw_verify 00:19:46.084 ************************************ 00:19:46.084 21:50:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:46.084 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:46.084 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d66bc3b1-25f9-4452-ad1f-fcc18ae3349a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d66bc3b1-25f9-4452-ad1f-fcc18ae3349a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "eac453c8-13c8-417a-ac66-58778f107da1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eac453c8-13c8-417a-ac66-58778f107da1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "7ceb1fc2-f5d2-4e2c-82aa-f51b2a0738f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7ceb1fc2-f5d2-4e2c-82aa-f51b2a0738f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "42e624a1-4d06-440b-abf9-cb1b15a56bfb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "42e624a1-4d06-440b-abf9-cb1b15a56bfb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "c34553c9-04b6-4f70-ba18-e1cdb8bd608e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c34553c9-04b6-4f70-ba18-e1cdb8bd608e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0f930404-eb09-46f1-8e7a-387c776b563e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0f930404-eb09-46f1-8e7a-387c776b563e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:46.344 /home/vagrant/spdk_repo/spdk 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:46.344 ************************************ 00:19:46.344 END TEST bdev_fio 
00:19:46.344 ************************************ 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:46.344 00:19:46.344 real 0m12.769s 00:19:46.344 user 0m42.026s 00:19:46.344 sys 0m14.308s 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.344 21:50:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:46.344 21:50:53 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:46.344 21:50:53 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:46.344 21:50:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:46.344 21:50:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.344 21:50:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.344 ************************************ 00:19:46.344 START TEST bdev_verify 00:19:46.344 ************************************ 00:19:46.344 21:50:53 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:46.344 [2024-12-10 21:50:54.049578] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:46.344 [2024-12-10 21:50:54.049693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75962 ] 00:19:46.603 [2024-12-10 21:50:54.232471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:46.900 [2024-12-10 21:50:54.358477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.900 [2024-12-10 21:50:54.358509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.159 Running I/O for 5 seconds... 
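For orientation, the run_test wrapper above reduces to one direct bdevperf invocation. A minimal standalone sketch, with the binary and config paths taken from this environment and the flag meanings spelled out (the -C semantics noted here are an assumption worth verifying against bdevperf --help):

#!/usr/bin/env bash
# Sketch of the verify pass that run_test bdev_verify drives above.
spdk=/home/vagrant/spdk_repo/spdk
args=(
  --json "$spdk/test/bdev/bdev.json"  # attach config for the xNVMe bdevs
  -q 128                              # queue depth per job
  -o 4096                             # I/O size in bytes (4 KiB)
  -w verify                           # write, read back, and compare
  -t 5                                # run time in seconds
  -C                                  # every core in the mask drives each bdev
  -m 0x3                              # two reactors, on cores 0 and 1
)
"$spdk/build/examples/bdevperf" "${args[@]}"

The two jobs per device in the results table below (Core Mask 0x1 and 0x2) are the visible effect of -C combined with -m 0x3.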
00:19:49.474 23968.00 IOPS, 93.62 MiB/s [2024-12-10T21:50:58.142Z] 23840.00 IOPS, 93.12 MiB/s [2024-12-10T21:50:59.078Z] 24032.00 IOPS, 93.87 MiB/s [2024-12-10T21:51:00.015Z] 24248.00 IOPS, 94.72 MiB/s [2024-12-10T21:51:00.015Z] 24217.60 IOPS, 94.60 MiB/s 00:19:52.284 Latency(us) 00:19:52.284 [2024-12-10T21:51:00.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.284 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x0 length 0x20000 00:19:52.284 nvme0n1 : 5.01 1864.70 7.28 0.00 0.00 68526.88 11580.66 68220.61 00:19:52.284 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x20000 length 0x20000 00:19:52.284 nvme0n1 : 5.04 1827.67 7.14 0.00 0.00 69923.69 16318.20 56429.39 00:19:52.284 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x0 length 0x80000 00:19:52.284 nvme1n1 : 5.04 1852.26 7.24 0.00 0.00 68888.67 10317.31 75800.67 00:19:52.284 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x80000 length 0x80000 00:19:52.284 nvme1n1 : 5.04 1827.15 7.14 0.00 0.00 69837.56 9843.56 58956.08 00:19:52.284 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x0 length 0x80000 00:19:52.284 nvme1n2 : 5.07 1867.84 7.30 0.00 0.00 68199.24 9843.56 69483.95 00:19:52.284 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x80000 length 0x80000 00:19:52.284 nvme1n2 : 5.06 1847.16 7.22 0.00 0.00 68961.32 9106.61 61482.77 00:19:52.284 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x0 length 0x80000 00:19:52.284 nvme1n3 : 5.03 1857.58 7.26 0.00 0.00 68476.33 9948.84 62325.00 00:19:52.284 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x80000 length 0x80000 00:19:52.284 nvme1n3 : 5.06 1846.75 7.21 0.00 0.00 68888.13 9475.08 58113.85 00:19:52.284 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:52.284 Verification LBA range: start 0x0 length 0xbd0bd 00:19:52.284 nvme2n1 : 5.07 2698.89 10.54 0.00 0.00 47001.29 6316.72 55587.16 00:19:52.284 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.285 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:52.285 nvme2n1 : 5.06 2809.52 10.97 0.00 0.00 45170.64 5158.66 55587.16 00:19:52.285 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:52.285 Verification LBA range: start 0x0 length 0xa0000 00:19:52.285 nvme3n1 : 5.08 1888.78 7.38 0.00 0.00 67088.78 5921.93 64430.57 00:19:52.285 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.285 Verification LBA range: start 0xa0000 length 0xa0000 00:19:52.285 nvme3n1 : 5.05 1823.25 7.12 0.00 0.00 69558.60 9317.17 60640.54 00:19:52.285 [2024-12-10T21:51:00.016Z] =================================================================================================================== 00:19:52.285 [2024-12-10T21:51:00.016Z] Total : 24011.56 93.80 0.00 0.00 63592.43 5158.66 75800.67 00:19:53.662 00:19:53.662 real 0m7.176s 00:19:53.662 user 0m10.782s 00:19:53.662 sys 0m2.165s 00:19:53.662 ************************************ 00:19:53.662 END TEST 
bdev_verify 00:19:53.662 ************************************ 00:19:53.662 21:51:01 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.662 21:51:01 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:53.662 21:51:01 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:53.662 21:51:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:53.662 21:51:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.662 21:51:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:53.662 ************************************ 00:19:53.662 START TEST bdev_verify_big_io 00:19:53.662 ************************************ 00:19:53.662 21:51:01 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:53.662 [2024-12-10 21:51:01.303025] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:19:53.662 [2024-12-10 21:51:01.303177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76067 ] 00:19:53.921 [2024-12-10 21:51:01.486354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:53.921 [2024-12-10 21:51:01.614378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.921 [2024-12-10 21:51:01.614409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.490 Running I/O for 5 seconds... 
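The big-I/O pass that follows reuses the same invocation with only the I/O size changed (-o 65536, i.e. 64 KiB), so bandwidth rather than IOPS becomes the headline number. The conversion is plain arithmetic; for example, checking one interval line from the table below:

# 3285.33 IOPS at 64 KiB per I/O:
awk 'BEGIN { printf "%.2f MiB/s\n", 3285.33 * 65536 / 1048576 }'
# prints 205.33 MiB/s, matching the "3285.33 IOPS, 205.33 MiB/s" interval.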
00:19:59.155 2272.00 IOPS, 142.00 MiB/s [2024-12-10T21:51:08.306Z] 3372.00 IOPS, 210.75 MiB/s [2024-12-10T21:51:08.306Z] 3285.33 IOPS, 205.33 MiB/s 00:20:00.575 Latency(us) 00:20:00.575 [2024-12-10T21:51:08.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.575 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x0 length 0x2000 00:20:00.575 nvme0n1 : 5.47 210.53 13.16 0.00 0.00 580814.87 4184.83 896132.42 00:20:00.575 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x2000 length 0x2000 00:20:00.575 nvme0n1 : 5.64 199.94 12.50 0.00 0.00 618668.64 5027.06 693997.29 00:20:00.575 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x0 length 0x8000 00:20:00.575 nvme1n1 : 5.65 165.75 10.36 0.00 0.00 723109.10 104857.60 1253237.82 00:20:00.575 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x8000 length 0x8000 00:20:00.575 nvme1n1 : 5.64 195.60 12.23 0.00 0.00 626598.50 65272.80 690628.37 00:20:00.575 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x0 length 0x8000 00:20:00.575 nvme1n2 : 5.70 162.93 10.18 0.00 0.00 719804.19 120438.85 1623818.90 00:20:00.575 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x8000 length 0x8000 00:20:00.575 nvme1n2 : 5.69 171.58 10.72 0.00 0.00 697627.35 41900.93 1569916.20 00:20:00.575 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x0 length 0x8000 00:20:00.575 nvme1n3 : 5.71 179.38 11.21 0.00 0.00 650402.85 53271.03 1064578.36 00:20:00.575 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x8000 length 0x8000 00:20:00.575 nvme1n3 : 5.68 160.44 10.03 0.00 0.00 727223.11 45059.29 1873118.89 00:20:00.575 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x0 length 0xbd0b 00:20:00.575 nvme2n1 : 5.72 215.30 13.46 0.00 0.00 530420.68 7264.23 1131956.74 00:20:00.575 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:00.575 nvme2n1 : 5.65 169.83 10.61 0.00 0.00 678555.75 94329.73 1468848.63 00:20:00.575 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0x0 length 0xa000 00:20:00.575 nvme3n1 : 5.72 205.74 12.86 0.00 0.00 541342.93 10896.35 667045.94 00:20:00.575 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:00.575 Verification LBA range: start 0xa000 length 0xa000 00:20:00.575 nvme3n1 : 5.70 203.68 12.73 0.00 0.00 556483.23 2434.57 619881.07 00:20:00.575 [2024-12-10T21:51:08.306Z] =================================================================================================================== 00:20:00.575 [2024-12-10T21:51:08.306Z] Total : 2240.70 140.04 0.00 0.00 630571.09 2434.57 1873118.89 00:20:01.954 00:20:01.954 real 0m8.124s 00:20:01.954 user 0m14.639s 00:20:01.954 sys 0m0.654s 00:20:01.954 21:51:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.954 21:51:09 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.954 ************************************ 00:20:01.954 END TEST bdev_verify_big_io 00:20:01.954 ************************************ 00:20:01.954 21:51:09 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:01.954 21:51:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:01.954 21:51:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.954 21:51:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:01.954 ************************************ 00:20:01.954 START TEST bdev_write_zeroes 00:20:01.954 ************************************ 00:20:01.954 21:51:09 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:01.954 [2024-12-10 21:51:09.500366] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:01.954 [2024-12-10 21:51:09.500485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76179 ] 00:20:01.954 [2024-12-10 21:51:09.682479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.213 [2024-12-10 21:51:09.807337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.782 Running I/O for 1 seconds... 
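A useful sanity check on these bdevperf tables: with a fixed queue depth, Little's law ties the columns together (outstanding I/Os ≈ IOPS × average latency). Applied to the nvme2n1 row of the write_zeroes results below, which runs at queue depth 128:

# Expected average latency at QD 128 given the observed IOPS:
awk 'BEGIN { printf "%.0f us\n", 128 / 11130.77 * 1e6 }'
# prints ~11500 us, in line with the ~11400 us average the table reports.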
00:20:03.718 46656.00 IOPS, 182.25 MiB/s 00:20:03.718 Latency(us) 00:20:03.718 [2024-12-10T21:51:11.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.718 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:03.718 nvme0n1 : 1.04 7045.48 27.52 0.00 0.00 18152.14 8422.30 29267.48 00:20:03.718 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:03.718 nvme1n1 : 1.04 7036.32 27.49 0.00 0.00 18163.91 8896.05 29688.60 00:20:03.718 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:03.718 nvme1n2 : 1.04 7026.94 27.45 0.00 0.00 18175.68 9053.97 30320.27 00:20:03.718 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:03.718 nvme1n3 : 1.04 7017.81 27.41 0.00 0.00 18188.83 9159.25 32215.29 00:20:03.718 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:03.718 nvme2n1 : 1.04 11130.77 43.48 0.00 0.00 11400.11 4842.82 26319.68 00:20:03.718 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:03.718 nvme3n1 : 1.03 7055.13 27.56 0.00 0.00 17960.19 5237.62 32004.73 00:20:03.718 [2024-12-10T21:51:11.449Z] =================================================================================================================== 00:20:03.718 [2024-12-10T21:51:11.449Z] Total : 46312.45 180.91 0.00 0.00 16506.61 4842.82 32215.29 00:20:05.097 00:20:05.097 real 0m3.043s 00:20:05.097 user 0m2.267s 00:20:05.097 sys 0m0.580s 00:20:05.097 21:51:12 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.097 ************************************ 00:20:05.097 END TEST bdev_write_zeroes 00:20:05.097 ************************************ 00:20:05.097 21:51:12 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:05.097 21:51:12 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:05.097 21:51:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:05.097 21:51:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.097 21:51:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:05.097 ************************************ 00:20:05.097 START TEST bdev_json_nonenclosed 00:20:05.097 ************************************ 00:20:05.097 21:51:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:05.097 [2024-12-10 21:51:12.620458] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
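Each bdevperf pass in this suite attaches its devices through the same --json file; the generated bdev.json itself is never echoed into the log. A hand-written equivalent for a single device might look like the sketch below, assuming the bdev_xnvme_create RPC method (the method name, parameter spellings, and io_mechanism value are assumptions, not taken from this log):

# Hypothetical single-device attach config in the shape bdevperf consumes.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "name": "nvme0n1",
            "filename": "/dev/nvme0n1",
            "io_mechanism": "io_uring"
          }
        }
      ]
    }
  ]
}
EOF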
00:20:05.097 [2024-12-10 21:51:12.620597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76238 ] 00:20:05.097 [2024-12-10 21:51:12.792887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.356 [2024-12-10 21:51:12.916222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.356 [2024-12-10 21:51:12.916338] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:05.356 [2024-12-10 21:51:12.916360] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:05.356 [2024-12-10 21:51:12.916373] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:05.615 00:20:05.615 real 0m0.646s 00:20:05.615 user 0m0.393s 00:20:05.615 sys 0m0.149s 00:20:05.615 21:51:13 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.615 21:51:13 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:05.615 ************************************ 00:20:05.615 END TEST bdev_json_nonenclosed 00:20:05.615 ************************************ 00:20:05.615 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:05.615 21:51:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:05.615 21:51:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.615 21:51:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:05.615 ************************************ 00:20:05.615 START TEST bdev_json_nonarray 00:20:05.615 ************************************ 00:20:05.615 21:51:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:05.875 [2024-12-10 21:51:13.348427] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:05.875 [2024-12-10 21:51:13.348567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76264 ] 00:20:05.875 [2024-12-10 21:51:13.521891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.134 [2024-12-10 21:51:13.651696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.134 [2024-12-10 21:51:13.651827] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
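The two JSON negative tests here (nonenclosed above, nonarray just finishing below) both rely on json_config_prepare_ctx rejecting the file before any subsystem is loaded, with spdk_app_stop returning non-zero as the expected clean failure. Sketches of the two malformed shapes follow; the repository copies of nonenclosed.json and nonarray.json may differ in detail:

# "not enclosed in {}": bare key at the top level instead of an object.
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# "'subsystems' should be an array": enclosed, but the value is an object.
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF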
00:20:06.134 [2024-12-10 21:51:13.651851] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:06.134 [2024-12-10 21:51:13.651865] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:06.393 00:20:06.393 real 0m0.657s 00:20:06.393 user 0m0.394s 00:20:06.393 sys 0m0.158s 00:20:06.393 21:51:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.393 21:51:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:06.393 ************************************ 00:20:06.393 END TEST bdev_json_nonarray 00:20:06.393 ************************************ 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:06.393 21:51:13 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:07.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:08.262 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:13.566 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:13.566 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:13.566 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:13.566 ************************************ 00:20:13.566 END TEST blockdev_xnvme 00:20:13.566 ************************************ 00:20:13.566 00:20:13.566 real 1m1.357s 00:20:13.566 user 1m40.139s 00:20:13.566 sys 0m36.417s 00:20:13.566 21:51:20 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.566 21:51:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 21:51:20 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:13.566 21:51:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:13.566 21:51:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.566 21:51:20 -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 ************************************ 00:20:13.566 START TEST ublk 00:20:13.566 ************************************ 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:13.566 * Looking for test storage... 
00:20:13.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:13.566 21:51:20 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.566 21:51:20 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.566 21:51:20 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.566 21:51:20 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.566 21:51:20 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.566 21:51:20 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.566 21:51:20 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.566 21:51:20 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.566 21:51:20 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.566 21:51:20 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.566 21:51:20 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.566 21:51:20 ublk -- scripts/common.sh@344 -- # case "$op" in 00:20:13.566 21:51:20 ublk -- scripts/common.sh@345 -- # : 1 00:20:13.566 21:51:20 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.566 21:51:20 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:13.566 21:51:20 ublk -- scripts/common.sh@365 -- # decimal 1 00:20:13.566 21:51:20 ublk -- scripts/common.sh@353 -- # local d=1 00:20:13.566 21:51:20 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.566 21:51:20 ublk -- scripts/common.sh@355 -- # echo 1 00:20:13.566 21:51:20 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.566 21:51:20 ublk -- scripts/common.sh@366 -- # decimal 2 00:20:13.566 21:51:20 ublk -- scripts/common.sh@353 -- # local d=2 00:20:13.566 21:51:20 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.566 21:51:20 ublk -- scripts/common.sh@355 -- # echo 2 00:20:13.566 21:51:20 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.566 21:51:20 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.566 21:51:20 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.566 21:51:20 ublk -- scripts/common.sh@368 -- # return 0 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:13.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.566 --rc genhtml_branch_coverage=1 00:20:13.566 --rc genhtml_function_coverage=1 00:20:13.566 --rc genhtml_legend=1 00:20:13.566 --rc geninfo_all_blocks=1 00:20:13.566 --rc geninfo_unexecuted_blocks=1 00:20:13.566 00:20:13.566 ' 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:13.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.566 --rc genhtml_branch_coverage=1 00:20:13.566 --rc genhtml_function_coverage=1 00:20:13.566 --rc genhtml_legend=1 00:20:13.566 --rc geninfo_all_blocks=1 00:20:13.566 --rc geninfo_unexecuted_blocks=1 00:20:13.566 00:20:13.566 ' 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:13.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.566 --rc genhtml_branch_coverage=1 00:20:13.566 --rc 
genhtml_function_coverage=1 00:20:13.566 --rc genhtml_legend=1 00:20:13.566 --rc geninfo_all_blocks=1 00:20:13.566 --rc geninfo_unexecuted_blocks=1 00:20:13.566 00:20:13.566 ' 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:13.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.566 --rc genhtml_branch_coverage=1 00:20:13.566 --rc genhtml_function_coverage=1 00:20:13.566 --rc genhtml_legend=1 00:20:13.566 --rc geninfo_all_blocks=1 00:20:13.566 --rc geninfo_unexecuted_blocks=1 00:20:13.566 00:20:13.566 ' 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:13.566 21:51:20 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:13.566 21:51:20 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:13.566 21:51:20 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:13.566 21:51:20 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:13.566 21:51:20 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:13.566 21:51:20 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:13.566 21:51:20 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:13.566 21:51:20 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:20:13.566 21:51:20 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.566 21:51:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 ************************************ 00:20:13.566 START TEST test_save_ublk_config 00:20:13.566 ************************************ 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76570 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76570 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76570 ']' 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
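The save-config flow being set up here can be reproduced by hand once spdk_tgt is listening: create the ublk target, back a ublk disk with a malloc bdev, then dump the runtime state with save_config. A sketch over scripts/rpc.py follows; the malloc size mirrors the 8192 blocks of 4 KiB in the JSON dump below, but the exact rpc.py flag spellings are assumptions worth checking against rpc.py --help:

#!/usr/bin/env bash
spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"
"$rpc" ublk_create_target                      # needs ublk_drv loaded (modprobe above)
"$rpc" bdev_malloc_create -b malloc0 32 4096   # 32 MiB = 8192 x 4 KiB blocks
"$rpc" ublk_start_disk malloc0 0 -q 1 -d 128   # bdev, ublk_id, queues, queue depth
"$rpc" save_config > saved.json                # JSON like the dump below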
00:20:13.566 21:51:20 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.566 21:51:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:13.566 [2024-12-10 21:51:21.014252] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:13.567 [2024-12-10 21:51:21.014409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76570 ] 00:20:13.567 [2024-12-10 21:51:21.196604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.826 [2024-12-10 21:51:21.315217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:14.761 [2024-12-10 21:51:22.333083] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:14.761 [2024-12-10 21:51:22.334322] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:14.761 malloc0 00:20:14.761 [2024-12-10 21:51:22.422236] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:14.761 [2024-12-10 21:51:22.422366] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:14.761 [2024-12-10 21:51:22.422381] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:14.761 [2024-12-10 21:51:22.422390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:14.761 [2024-12-10 21:51:22.430104] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:14.761 [2024-12-10 21:51:22.430130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:14.761 [2024-12-10 21:51:22.438081] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:14.761 [2024-12-10 21:51:22.438203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:14.761 [2024-12-10 21:51:22.462087] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:14.761 0 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.761 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:15.329 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.329 21:51:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:15.329 
"subsystems": [ 00:20:15.329 { 00:20:15.329 "subsystem": "fsdev", 00:20:15.329 "config": [ 00:20:15.329 { 00:20:15.329 "method": "fsdev_set_opts", 00:20:15.329 "params": { 00:20:15.329 "fsdev_io_pool_size": 65535, 00:20:15.329 "fsdev_io_cache_size": 256 00:20:15.329 } 00:20:15.329 } 00:20:15.329 ] 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "subsystem": "keyring", 00:20:15.329 "config": [] 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "subsystem": "iobuf", 00:20:15.329 "config": [ 00:20:15.329 { 00:20:15.329 "method": "iobuf_set_options", 00:20:15.329 "params": { 00:20:15.329 "small_pool_count": 8192, 00:20:15.329 "large_pool_count": 1024, 00:20:15.329 "small_bufsize": 8192, 00:20:15.329 "large_bufsize": 135168, 00:20:15.329 "enable_numa": false 00:20:15.329 } 00:20:15.329 } 00:20:15.329 ] 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "subsystem": "sock", 00:20:15.329 "config": [ 00:20:15.329 { 00:20:15.329 "method": "sock_set_default_impl", 00:20:15.329 "params": { 00:20:15.329 "impl_name": "posix" 00:20:15.329 } 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "method": "sock_impl_set_options", 00:20:15.329 "params": { 00:20:15.329 "impl_name": "ssl", 00:20:15.329 "recv_buf_size": 4096, 00:20:15.329 "send_buf_size": 4096, 00:20:15.329 "enable_recv_pipe": true, 00:20:15.329 "enable_quickack": false, 00:20:15.329 "enable_placement_id": 0, 00:20:15.329 "enable_zerocopy_send_server": true, 00:20:15.329 "enable_zerocopy_send_client": false, 00:20:15.329 "zerocopy_threshold": 0, 00:20:15.329 "tls_version": 0, 00:20:15.329 "enable_ktls": false 00:20:15.329 } 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "method": "sock_impl_set_options", 00:20:15.329 "params": { 00:20:15.329 "impl_name": "posix", 00:20:15.329 "recv_buf_size": 2097152, 00:20:15.329 "send_buf_size": 2097152, 00:20:15.329 "enable_recv_pipe": true, 00:20:15.329 "enable_quickack": false, 00:20:15.329 "enable_placement_id": 0, 00:20:15.329 "enable_zerocopy_send_server": true, 00:20:15.329 "enable_zerocopy_send_client": false, 00:20:15.329 "zerocopy_threshold": 0, 00:20:15.329 "tls_version": 0, 00:20:15.329 "enable_ktls": false 00:20:15.329 } 00:20:15.329 } 00:20:15.329 ] 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "subsystem": "vmd", 00:20:15.329 "config": [] 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "subsystem": "accel", 00:20:15.329 "config": [ 00:20:15.329 { 00:20:15.329 "method": "accel_set_options", 00:20:15.329 "params": { 00:20:15.329 "small_cache_size": 128, 00:20:15.329 "large_cache_size": 16, 00:20:15.329 "task_count": 2048, 00:20:15.329 "sequence_count": 2048, 00:20:15.329 "buf_count": 2048 00:20:15.329 } 00:20:15.329 } 00:20:15.329 ] 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "subsystem": "bdev", 00:20:15.329 "config": [ 00:20:15.329 { 00:20:15.329 "method": "bdev_set_options", 00:20:15.329 "params": { 00:20:15.329 "bdev_io_pool_size": 65535, 00:20:15.329 "bdev_io_cache_size": 256, 00:20:15.329 "bdev_auto_examine": true, 00:20:15.329 "iobuf_small_cache_size": 128, 00:20:15.329 "iobuf_large_cache_size": 16 00:20:15.329 } 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "method": "bdev_raid_set_options", 00:20:15.329 "params": { 00:20:15.329 "process_window_size_kb": 1024, 00:20:15.329 "process_max_bandwidth_mb_sec": 0 00:20:15.329 } 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "method": "bdev_iscsi_set_options", 00:20:15.329 "params": { 00:20:15.329 "timeout_sec": 30 00:20:15.329 } 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "method": "bdev_nvme_set_options", 00:20:15.329 "params": { 00:20:15.329 "action_on_timeout": "none", 
00:20:15.329 "timeout_us": 0, 00:20:15.329 "timeout_admin_us": 0, 00:20:15.329 "keep_alive_timeout_ms": 10000, 00:20:15.329 "arbitration_burst": 0, 00:20:15.329 "low_priority_weight": 0, 00:20:15.329 "medium_priority_weight": 0, 00:20:15.329 "high_priority_weight": 0, 00:20:15.329 "nvme_adminq_poll_period_us": 10000, 00:20:15.329 "nvme_ioq_poll_period_us": 0, 00:20:15.329 "io_queue_requests": 0, 00:20:15.329 "delay_cmd_submit": true, 00:20:15.329 "transport_retry_count": 4, 00:20:15.329 "bdev_retry_count": 3, 00:20:15.329 "transport_ack_timeout": 0, 00:20:15.329 "ctrlr_loss_timeout_sec": 0, 00:20:15.329 "reconnect_delay_sec": 0, 00:20:15.329 "fast_io_fail_timeout_sec": 0, 00:20:15.329 "disable_auto_failback": false, 00:20:15.329 "generate_uuids": false, 00:20:15.329 "transport_tos": 0, 00:20:15.329 "nvme_error_stat": false, 00:20:15.329 "rdma_srq_size": 0, 00:20:15.329 "io_path_stat": false, 00:20:15.329 "allow_accel_sequence": false, 00:20:15.329 "rdma_max_cq_size": 0, 00:20:15.329 "rdma_cm_event_timeout_ms": 0, 00:20:15.329 "dhchap_digests": [ 00:20:15.329 "sha256", 00:20:15.329 "sha384", 00:20:15.329 "sha512" 00:20:15.329 ], 00:20:15.329 "dhchap_dhgroups": [ 00:20:15.329 "null", 00:20:15.329 "ffdhe2048", 00:20:15.329 "ffdhe3072", 00:20:15.329 "ffdhe4096", 00:20:15.329 "ffdhe6144", 00:20:15.329 "ffdhe8192" 00:20:15.329 ], 00:20:15.329 "rdma_umr_per_io": false 00:20:15.329 } 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "method": "bdev_nvme_set_hotplug", 00:20:15.329 "params": { 00:20:15.329 "period_us": 100000, 00:20:15.329 "enable": false 00:20:15.329 } 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "method": "bdev_malloc_create", 00:20:15.329 "params": { 00:20:15.329 "name": "malloc0", 00:20:15.329 "num_blocks": 8192, 00:20:15.329 "block_size": 4096, 00:20:15.329 "physical_block_size": 4096, 00:20:15.329 "uuid": "bed5bd50-261e-485a-805b-d1df5721783a", 00:20:15.329 "optimal_io_boundary": 0, 00:20:15.329 "md_size": 0, 00:20:15.329 "dif_type": 0, 00:20:15.329 "dif_is_head_of_md": false, 00:20:15.329 "dif_pi_format": 0 00:20:15.329 } 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "method": "bdev_wait_for_examine" 00:20:15.329 } 00:20:15.329 ] 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "subsystem": "scsi", 00:20:15.329 "config": null 00:20:15.329 }, 00:20:15.329 { 00:20:15.329 "subsystem": "scheduler", 00:20:15.329 "config": [ 00:20:15.329 { 00:20:15.329 "method": "framework_set_scheduler", 00:20:15.329 "params": { 00:20:15.329 "name": "static" 00:20:15.329 } 00:20:15.329 } 00:20:15.329 ] 00:20:15.329 }, 00:20:15.330 { 00:20:15.330 "subsystem": "vhost_scsi", 00:20:15.330 "config": [] 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "subsystem": "vhost_blk", 00:20:15.330 "config": [] 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "subsystem": "ublk", 00:20:15.330 "config": [ 00:20:15.330 { 00:20:15.330 "method": "ublk_create_target", 00:20:15.330 "params": { 00:20:15.330 "cpumask": "1" 00:20:15.330 } 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "method": "ublk_start_disk", 00:20:15.330 "params": { 00:20:15.330 "bdev_name": "malloc0", 00:20:15.330 "ublk_id": 0, 00:20:15.330 "num_queues": 1, 00:20:15.330 "queue_depth": 128 00:20:15.330 } 00:20:15.330 } 00:20:15.330 ] 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "subsystem": "nbd", 00:20:15.330 "config": [] 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "subsystem": "nvmf", 00:20:15.330 "config": [ 00:20:15.330 { 00:20:15.330 "method": "nvmf_set_config", 00:20:15.330 "params": { 00:20:15.330 "discovery_filter": "match_any", 00:20:15.330 "admin_cmd_passthru": { 
00:20:15.330 "identify_ctrlr": false 00:20:15.330 }, 00:20:15.330 "dhchap_digests": [ 00:20:15.330 "sha256", 00:20:15.330 "sha384", 00:20:15.330 "sha512" 00:20:15.330 ], 00:20:15.330 "dhchap_dhgroups": [ 00:20:15.330 "null", 00:20:15.330 "ffdhe2048", 00:20:15.330 "ffdhe3072", 00:20:15.330 "ffdhe4096", 00:20:15.330 "ffdhe6144", 00:20:15.330 "ffdhe8192" 00:20:15.330 ] 00:20:15.330 } 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "method": "nvmf_set_max_subsystems", 00:20:15.330 "params": { 00:20:15.330 "max_subsystems": 1024 00:20:15.330 } 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "method": "nvmf_set_crdt", 00:20:15.330 "params": { 00:20:15.330 "crdt1": 0, 00:20:15.330 "crdt2": 0, 00:20:15.330 "crdt3": 0 00:20:15.330 } 00:20:15.330 } 00:20:15.330 ] 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "subsystem": "iscsi", 00:20:15.330 "config": [ 00:20:15.330 { 00:20:15.330 "method": "iscsi_set_options", 00:20:15.330 "params": { 00:20:15.330 "node_base": "iqn.2016-06.io.spdk", 00:20:15.330 "max_sessions": 128, 00:20:15.330 "max_connections_per_session": 2, 00:20:15.330 "max_queue_depth": 64, 00:20:15.330 "default_time2wait": 2, 00:20:15.330 "default_time2retain": 20, 00:20:15.330 "first_burst_length": 8192, 00:20:15.330 "immediate_data": true, 00:20:15.330 "allow_duplicated_isid": false, 00:20:15.330 "error_recovery_level": 0, 00:20:15.330 "nop_timeout": 60, 00:20:15.330 "nop_in_interval": 30, 00:20:15.330 "disable_chap": false, 00:20:15.330 "require_chap": false, 00:20:15.330 "mutual_chap": false, 00:20:15.330 "chap_group": 0, 00:20:15.330 "max_large_datain_per_connection": 64, 00:20:15.330 "max_r2t_per_connection": 4, 00:20:15.330 "pdu_pool_size": 36864, 00:20:15.330 "immediate_data_pool_size": 16384, 00:20:15.330 "data_out_pool_size": 2048 00:20:15.330 } 00:20:15.330 } 00:20:15.330 ] 00:20:15.330 } 00:20:15.330 ] 00:20:15.330 }' 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76570 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76570 ']' 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76570 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76570 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76570' 00:20:15.330 killing process with pid 76570 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76570 00:20:15.330 21:51:22 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76570 00:20:16.707 [2024-12-10 21:51:24.324471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:16.707 [2024-12-10 21:51:24.364170] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:16.707 [2024-12-10 21:51:24.364299] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:16.707 [2024-12-10 21:51:24.374089] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 
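Killing pid 76570 ends the capture half; the second spdk_tgt below (pid 76640) is the restore half, fed the captured JSON straight back in via -c /dev/fd/63. With the config in a regular file, the same replay looks like this sketch (the harness's waitforlisten is approximated by a sleep here):

#!/usr/bin/env bash
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" -L ublk -c saved.json &   # replay captured state
sleep 2                                              # crude stand-in for waitforlisten
"$spdk/scripts/rpc.py" ublk_get_disks                # expect ublk_device /dev/ublkb0
[[ -b /dev/ublkb0 ]] && echo "config restored"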
00:20:16.707 [2024-12-10 21:51:24.374144] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:16.707 [2024-12-10 21:51:24.374161] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:16.707 [2024-12-10 21:51:24.374189] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:16.707 [2024-12-10 21:51:24.374355] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76640 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76640 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76640 ']' 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:18.615 21:51:26 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:18.615 "subsystems": [ 00:20:18.615 { 00:20:18.615 "subsystem": "fsdev", 00:20:18.615 "config": [ 00:20:18.615 { 00:20:18.615 "method": "fsdev_set_opts", 00:20:18.615 "params": { 00:20:18.615 "fsdev_io_pool_size": 65535, 00:20:18.615 "fsdev_io_cache_size": 256 00:20:18.615 } 00:20:18.615 } 00:20:18.615 ] 00:20:18.615 }, 00:20:18.615 { 00:20:18.615 "subsystem": "keyring", 00:20:18.615 "config": [] 00:20:18.615 }, 00:20:18.615 { 00:20:18.615 "subsystem": "iobuf", 00:20:18.615 "config": [ 00:20:18.615 { 00:20:18.615 "method": "iobuf_set_options", 00:20:18.615 "params": { 00:20:18.615 "small_pool_count": 8192, 00:20:18.615 "large_pool_count": 1024, 00:20:18.615 "small_bufsize": 8192, 00:20:18.615 "large_bufsize": 135168, 00:20:18.615 "enable_numa": false 00:20:18.615 } 00:20:18.615 } 00:20:18.615 ] 00:20:18.615 }, 00:20:18.615 { 00:20:18.615 "subsystem": "sock", 00:20:18.615 "config": [ 00:20:18.615 { 00:20:18.615 "method": "sock_set_default_impl", 00:20:18.615 "params": { 00:20:18.615 "impl_name": "posix" 00:20:18.615 } 00:20:18.615 }, 00:20:18.615 { 00:20:18.615 "method": "sock_impl_set_options", 00:20:18.615 "params": { 00:20:18.615 "impl_name": "ssl", 00:20:18.615 "recv_buf_size": 4096, 00:20:18.615 "send_buf_size": 4096, 00:20:18.615 "enable_recv_pipe": true, 00:20:18.615 "enable_quickack": false, 00:20:18.615 "enable_placement_id": 0, 00:20:18.615 "enable_zerocopy_send_server": true, 00:20:18.615 "enable_zerocopy_send_client": false, 00:20:18.615 "zerocopy_threshold": 0, 00:20:18.615 "tls_version": 0, 00:20:18.615 "enable_ktls": false 00:20:18.615 } 00:20:18.615 }, 00:20:18.615 { 00:20:18.615 "method": "sock_impl_set_options", 00:20:18.615 "params": { 00:20:18.615 "impl_name": "posix", 00:20:18.615 "recv_buf_size": 2097152, 00:20:18.615 "send_buf_size": 2097152, 00:20:18.615 "enable_recv_pipe": true, 00:20:18.615 "enable_quickack": false, 00:20:18.615 "enable_placement_id": 0, 00:20:18.615 
"enable_zerocopy_send_server": true, 00:20:18.615 "enable_zerocopy_send_client": false, 00:20:18.615 "zerocopy_threshold": 0, 00:20:18.615 "tls_version": 0, 00:20:18.615 "enable_ktls": false 00:20:18.615 } 00:20:18.615 } 00:20:18.615 ] 00:20:18.615 }, 00:20:18.615 { 00:20:18.615 "subsystem": "vmd", 00:20:18.615 "config": [] 00:20:18.615 }, 00:20:18.615 { 00:20:18.615 "subsystem": "accel", 00:20:18.615 "config": [ 00:20:18.615 { 00:20:18.615 "method": "accel_set_options", 00:20:18.615 "params": { 00:20:18.615 "small_cache_size": 128, 00:20:18.615 "large_cache_size": 16, 00:20:18.615 "task_count": 2048, 00:20:18.615 "sequence_count": 2048, 00:20:18.615 "buf_count": 2048 00:20:18.615 } 00:20:18.615 } 00:20:18.615 ] 00:20:18.615 }, 00:20:18.615 { 00:20:18.615 "subsystem": "bdev", 00:20:18.615 "config": [ 00:20:18.615 { 00:20:18.615 "method": "bdev_set_options", 00:20:18.615 "params": { 00:20:18.615 "bdev_io_pool_size": 65535, 00:20:18.615 "bdev_io_cache_size": 256, 00:20:18.616 "bdev_auto_examine": true, 00:20:18.616 "iobuf_small_cache_size": 128, 00:20:18.616 "iobuf_large_cache_size": 16 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "bdev_raid_set_options", 00:20:18.616 "params": { 00:20:18.616 "process_window_size_kb": 1024, 00:20:18.616 "process_max_bandwidth_mb_sec": 0 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "bdev_iscsi_set_options", 00:20:18.616 "params": { 00:20:18.616 "timeout_sec": 30 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "bdev_nvme_set_options", 00:20:18.616 "params": { 00:20:18.616 "action_on_timeout": "none", 00:20:18.616 "timeout_us": 0, 00:20:18.616 "timeout_admin_us": 0, 00:20:18.616 "keep_alive_timeout_ms": 10000, 00:20:18.616 "arbitration_burst": 0, 00:20:18.616 "low_priority_weight": 0, 00:20:18.616 "medium_priority_weight": 0, 00:20:18.616 "high_priority_weight": 0, 00:20:18.616 "nvme_adminq_poll_period_us": 10000, 00:20:18.616 "nvme_ioq_poll_period_us": 0, 00:20:18.616 "io_queue_requests": 0, 00:20:18.616 "delay_cmd_submit": true, 00:20:18.616 "transport_retry_count": 4, 00:20:18.616 "bdev_retry_count": 3, 00:20:18.616 "transport_ack_timeout": 0, 00:20:18.616 "ctrlr_loss_timeout_sec": 0, 00:20:18.616 "reconnect_delay_sec": 0, 00:20:18.616 "fast_io_fail_timeout_sec": 0, 00:20:18.616 "disable_auto_failback": false, 00:20:18.616 "generate_uuids": false, 00:20:18.616 "transport_tos": 0, 00:20:18.616 "nvme_error_stat": false, 00:20:18.616 "rdma_srq_size": 0, 00:20:18.616 "io_path_stat": false, 00:20:18.616 "allow_accel_sequence": false, 00:20:18.616 "rdma_max_cq_size": 0, 00:20:18.616 "rdma_cm_event_timeout_ms": 0, 00:20:18.616 "dhchap_digests": [ 00:20:18.616 "sha256", 00:20:18.616 "sha384", 00:20:18.616 "sha512" 00:20:18.616 ], 00:20:18.616 "dhchap_dhgroups": [ 00:20:18.616 "null", 00:20:18.616 "ffdhe2048", 00:20:18.616 "ffdhe3072", 00:20:18.616 "ffdhe4096", 00:20:18.616 "ffdhe6144", 00:20:18.616 "ffdhe8192" 00:20:18.616 ], 00:20:18.616 "rdma_umr_per_io": false 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "bdev_nvme_set_hotplug", 00:20:18.616 "params": { 00:20:18.616 "period_us": 100000, 00:20:18.616 "enable": false 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "bdev_malloc_create", 00:20:18.616 "params": { 00:20:18.616 "name": "malloc0", 00:20:18.616 "num_blocks": 8192, 00:20:18.616 "block_size": 4096, 00:20:18.616 "physical_block_size": 4096, 00:20:18.616 "uuid": "bed5bd50-261e-485a-805b-d1df5721783a", 00:20:18.616 "optimal_io_boundary": 0, 
00:20:18.616 "md_size": 0, 00:20:18.616 "dif_type": 0, 00:20:18.616 "dif_is_head_of_md": false, 00:20:18.616 "dif_pi_format": 0 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "bdev_wait_for_examine" 00:20:18.616 } 00:20:18.616 ] 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "subsystem": "scsi", 00:20:18.616 "config": null 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "subsystem": "scheduler", 00:20:18.616 "config": [ 00:20:18.616 { 00:20:18.616 "method": "framework_set_scheduler", 00:20:18.616 "params": { 00:20:18.616 "name": "static" 00:20:18.616 } 00:20:18.616 } 00:20:18.616 ] 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "subsystem": "vhost_scsi", 00:20:18.616 "config": [] 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "subsystem": "vhost_blk", 00:20:18.616 "config": [] 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "subsystem": "ublk", 00:20:18.616 "config": [ 00:20:18.616 { 00:20:18.616 "method": "ublk_create_target", 00:20:18.616 "params": { 00:20:18.616 "cpumask": "1" 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "ublk_start_disk", 00:20:18.616 "params": { 00:20:18.616 "bdev_name": "malloc0", 00:20:18.616 "ublk_id": 0, 00:20:18.616 "num_queues": 1, 00:20:18.616 "queue_depth": 128 00:20:18.616 } 00:20:18.616 } 00:20:18.616 ] 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "subsystem": "nbd", 00:20:18.616 "config": [] 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "subsystem": "nvmf", 00:20:18.616 "config": [ 00:20:18.616 { 00:20:18.616 "method": "nvmf_set_config", 00:20:18.616 "params": { 00:20:18.616 "discovery_filter": "match_any", 00:20:18.616 "admin_cmd_passthru": { 00:20:18.616 "identify_ctrlr": false 00:20:18.616 }, 00:20:18.616 "dhchap_digests": [ 00:20:18.616 "sha256", 00:20:18.616 "sha384", 00:20:18.616 "sha512" 00:20:18.616 ], 00:20:18.616 "dhchap_dhgroups": [ 00:20:18.616 "null", 00:20:18.616 "ffdhe2048", 00:20:18.616 "ffdhe3072", 00:20:18.616 "ffdhe4096", 00:20:18.616 "ffdhe6144", 00:20:18.616 "ffdhe8192" 00:20:18.616 ] 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "nvmf_set_max_subsystems", 00:20:18.616 "params": { 00:20:18.616 "max_subsystems": 1024 00:20:18.616 } 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "method": "nvmf_set_crdt", 00:20:18.616 "params": { 00:20:18.616 "crdt1": 0, 00:20:18.616 "crdt2": 0, 00:20:18.616 "crdt3": 0 00:20:18.616 } 00:20:18.616 } 00:20:18.616 ] 00:20:18.616 }, 00:20:18.616 { 00:20:18.616 "subsystem": "iscsi", 00:20:18.616 "config": [ 00:20:18.616 { 00:20:18.616 "method": "iscsi_set_options", 00:20:18.616 "params": { 00:20:18.616 "node_base": "iqn.2016-06.io.spdk", 00:20:18.616 "max_sessions": 128, 00:20:18.616 "max_connections_per_session": 2, 00:20:18.616 "max_queue_depth": 64, 00:20:18.616 "default_time2wait": 2, 00:20:18.616 "default_time2retain": 20, 00:20:18.616 "first_burst_length": 8192, 00:20:18.616 "immediate_data": true, 00:20:18.616 "allow_duplicated_isid": false, 00:20:18.616 "error_recovery_level": 0, 00:20:18.616 "nop_timeout": 60, 00:20:18.616 "nop_in_interval": 30, 00:20:18.616 "disable_chap": false, 00:20:18.616 "require_chap": false, 00:20:18.616 "mutual_chap": false, 00:20:18.616 "chap_group": 0, 00:20:18.616 "max_large_datain_per_connection": 64, 00:20:18.616 "max_r2t_per_connection": 4, 00:20:18.616 "pdu_pool_size": 36864, 00:20:18.616 "immediate_data_pool_size": 16384, 00:20:18.616 "data_out_pool_size": 2048 00:20:18.616 } 00:20:18.616 } 00:20:18.616 ] 00:20:18.616 } 00:20:18.616 ] 00:20:18.616 }' 00:20:18.875 [2024-12-10 21:51:26.417253] Starting SPDK v25.01-pre git 
sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:18.875 [2024-12-10 21:51:26.417401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76640 ] 00:20:18.875 [2024-12-10 21:51:26.597211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.133 [2024-12-10 21:51:26.722286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.537 [2024-12-10 21:51:27.878066] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:20.537 [2024-12-10 21:51:27.879281] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:20.537 [2024-12-10 21:51:27.886228] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:20.537 [2024-12-10 21:51:27.886337] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:20.537 [2024-12-10 21:51:27.886350] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:20.537 [2024-12-10 21:51:27.886358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:20.537 [2024-12-10 21:51:27.895188] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:20.537 [2024-12-10 21:51:27.895211] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:20.537 [2024-12-10 21:51:27.902077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:20.537 [2024-12-10 21:51:27.902173] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:20.537 [2024-12-10 21:51:27.919070] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:20.537 21:51:27 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.537 21:51:27 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:20.537 21:51:27 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:20.537 21:51:27 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:20.537 21:51:27 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.537 21:51:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:20.537 21:51:27 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76640 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76640 ']' 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76640 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76640 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.538 
21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.538 killing process with pid 76640 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76640' 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76640 00:20:20.538 21:51:28 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76640 00:20:22.443 [2024-12-10 21:51:29.732995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:22.443 [2024-12-10 21:51:29.783067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:22.443 [2024-12-10 21:51:29.783211] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:22.443 [2024-12-10 21:51:29.792094] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:22.443 [2024-12-10 21:51:29.792145] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:22.443 [2024-12-10 21:51:29.792155] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:22.443 [2024-12-10 21:51:29.792180] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:22.443 [2024-12-10 21:51:29.792328] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:24.349 21:51:31 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:24.349 ************************************ 00:20:24.349 END TEST test_save_ublk_config 00:20:24.349 ************************************ 00:20:24.349 00:20:24.349 real 0m10.803s 00:20:24.349 user 0m8.012s 00:20:24.349 sys 0m3.488s 00:20:24.349 21:51:31 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.349 21:51:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:24.349 21:51:31 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76727 00:20:24.349 21:51:31 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:24.349 21:51:31 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.349 21:51:31 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76727 00:20:24.349 21:51:31 ublk -- common/autotest_common.sh@835 -- # '[' -z 76727 ']' 00:20:24.349 21:51:31 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.349 21:51:31 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.349 21:51:31 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.349 21:51:31 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.349 21:51:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:24.349 [2024-12-10 21:51:31.878603] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
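[editor's note] That closes test_save_ublk_config: the JSON dump earlier in the log is the saved target configuration, pid 76640 was booted from it, and the /dev/ublkb0 checks confirmed the ublk device survived the round trip. A minimal sketch of the same save/restore cycle outside the harness, assuming the stock rpc.py helpers and an arbitrary dump path (the script's exact invocation is not visible in this log):

    scripts/rpc.py save_config > /tmp/ublk.json                # capture the live config (the JSON printed above)
    build/bin/spdk_tgt --json /tmp/ublk.json &                 # boot a fresh target from the snapshot
    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0 to reappear

The spdk_tgt starting here (pid 76727, two cores) hosts the test_create_ublk and test_create_multi_ublk runs that follow.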
00:20:24.349 [2024-12-10 21:51:31.878746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76727 ] 00:20:24.349 [2024-12-10 21:51:32.061723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:24.608 [2024-12-10 21:51:32.185841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.608 [2024-12-10 21:51:32.185878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.546 21:51:33 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.546 21:51:33 ublk -- common/autotest_common.sh@868 -- # return 0 00:20:25.546 21:51:33 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:25.546 21:51:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.546 21:51:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.546 21:51:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.546 ************************************ 00:20:25.546 START TEST test_create_ublk 00:20:25.546 ************************************ 00:20:25.546 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:20:25.546 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:25.546 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.546 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.546 [2024-12-10 21:51:33.174071] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:25.546 [2024-12-10 21:51:33.177357] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:25.546 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.546 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:25.546 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:25.546 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.546 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.805 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.805 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:25.805 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:25.805 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.805 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:25.805 [2024-12-10 21:51:33.464234] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:25.805 [2024-12-10 21:51:33.464686] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:25.805 [2024-12-10 21:51:33.464703] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:25.805 [2024-12-10 21:51:33.464711] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:25.805 [2024-12-10 21:51:33.473516] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:25.805 [2024-12-10 21:51:33.473542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:25.805 
[2024-12-10 21:51:33.480085] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:25.805 [2024-12-10 21:51:33.480702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:25.805 [2024-12-10 21:51:33.503096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:25.805 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.805 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:25.805 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:25.805 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:25.805 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.805 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.065 21:51:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:26.065 { 00:20:26.065 "ublk_device": "/dev/ublkb0", 00:20:26.065 "id": 0, 00:20:26.065 "queue_depth": 512, 00:20:26.065 "num_queues": 4, 00:20:26.065 "bdev_name": "Malloc0" 00:20:26.065 } 00:20:26.065 ]' 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:26.065 21:51:33 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
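[editor's note] The fio_template assembled above is executed verbatim on the next line: a 10-second, direct-I/O sequential write of the 0xcc pattern over the first 128 MiB (134217728 bytes) of /dev/ublkb0, with --do_verify=1 arming a read-back pass that, as fio notes below, never runs because the time-based write consumes the whole runtime. A hand spot-check of the written pattern, as a sketch using the same device and block size:

    dd if=/dev/ublkb0 bs=4096 count=1 status=none | xxd | head -n 2   # every data byte should read cc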
00:20:26.065 21:51:33 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:26.325 fio: verification read phase will never start because write phase uses all of runtime 00:20:26.325 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:26.325 fio-3.35 00:20:26.325 Starting 1 process 00:20:36.377 00:20:36.377 fio_test: (groupid=0, jobs=1): err= 0: pid=76785: Tue Dec 10 21:51:44 2024 00:20:36.377 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(618MiB/10000msec); 0 zone resets 00:20:36.377 clat (usec): min=38, max=4031, avg=62.20, stdev=100.51 00:20:36.377 lat (usec): min=38, max=4032, avg=62.76, stdev=100.51 00:20:36.377 clat percentiles (usec): 00:20:36.377 | 1.00th=[ 43], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 54], 00:20:36.377 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 59], 00:20:36.377 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 67], 95.00th=[ 69], 00:20:36.377 | 99.00th=[ 81], 99.50th=[ 89], 99.90th=[ 2114], 99.95th=[ 2868], 00:20:36.377 | 99.99th=[ 3621] 00:20:36.377 bw ( KiB/s): min=60616, max=70115, per=99.94%, avg=63278.47, stdev=2775.58, samples=19 00:20:36.377 iops : min=15154, max=17528, avg=15819.58, stdev=693.79, samples=19 00:20:36.377 lat (usec) : 50=3.24%, 100=96.40%, 250=0.14%, 500=0.02%, 750=0.02% 00:20:36.377 lat (usec) : 1000=0.01% 00:20:36.377 lat (msec) : 2=0.06%, 4=0.11%, 10=0.01% 00:20:36.377 cpu : usr=3.56%, sys=11.63%, ctx=158289, majf=0, minf=796 00:20:36.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.377 issued rwts: total=0,158295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:36.377 00:20:36.377 Run status group 0 (all jobs): 00:20:36.377 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=618MiB (648MB), run=10000-10000msec 00:20:36.377 00:20:36.377 Disk stats (read/write): 00:20:36.377 ublkb0: ios=0/156630, merge=0/0, ticks=0/8450, in_queue=8450, util=99.10% 00:20:36.377 21:51:44 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:36.377 [2024-12-10 21:51:44.042466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:36.377 [2024-12-10 21:51:44.078114] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:36.377 [2024-12-10 21:51:44.078863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:36.377 [2024-12-10 21:51:44.087128] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:36.377 [2024-12-10 21:51:44.087399] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:36.377 [2024-12-10 21:51:44.087413] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.377 21:51:44 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.377 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:36.636 [2024-12-10 21:51:44.110161] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:36.636 request: 00:20:36.636 { 00:20:36.636 "ublk_id": 0, 00:20:36.636 "method": "ublk_stop_disk", 00:20:36.636 "req_id": 1 00:20:36.636 } 00:20:36.636 Got JSON-RPC error response 00:20:36.636 response: 00:20:36.636 { 00:20:36.636 "code": -19, 00:20:36.636 "message": "No such device" 00:20:36.636 } 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.636 21:51:44 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:36.636 [2024-12-10 21:51:44.118166] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:36.636 [2024-12-10 21:51:44.126068] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:36.636 [2024-12-10 21:51:44.126105] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.636 21:51:44 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.636 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:37.202 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.202 21:51:44 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:37.202 21:51:44 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:37.202 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.202 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:37.202 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.202 21:51:44 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:37.202 21:51:44 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:37.202 21:51:44 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:37.202 21:51:44 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:37.202 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.202 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:37.202 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.202 21:51:44 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:37.202 21:51:44 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:37.460 21:51:44 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:37.460 00:20:37.461 real 0m11.797s 00:20:37.461 user 0m0.791s 00:20:37.461 sys 0m1.289s 00:20:37.461 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.461 ************************************ 00:20:37.461 21:51:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:37.461 END TEST test_create_ublk 00:20:37.461 ************************************ 00:20:37.461 21:51:45 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:37.461 21:51:45 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:37.461 21:51:45 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.461 21:51:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:37.461 ************************************ 00:20:37.461 START TEST test_create_multi_ublk 00:20:37.461 ************************************ 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:37.461 [2024-12-10 21:51:45.042066] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:37.461 [2024-12-10 21:51:45.044769] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.461 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:37.719 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.719 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:37.719 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:37.719 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.719 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:37.719 [2024-12-10 21:51:45.446250] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:20:37.719 [2024-12-10 21:51:45.446753] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:37.719 [2024-12-10 21:51:45.446771] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:37.719 [2024-12-10 21:51:45.446785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:37.978 [2024-12-10 21:51:45.470076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:37.978 [2024-12-10 21:51:45.470108] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:37.978 [2024-12-10 21:51:45.482083] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:37.978 [2024-12-10 21:51:45.482738] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:37.978 [2024-12-10 21:51:45.522088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:37.978 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.978 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:37.978 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:37.978 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:37.978 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.978 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:38.237 [2024-12-10 21:51:45.908225] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:38.237 [2024-12-10 21:51:45.908679] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:38.237 [2024-12-10 21:51:45.908698] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:38.237 [2024-12-10 21:51:45.908706] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:38.237 [2024-12-10 21:51:45.916122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:38.237 [2024-12-10 21:51:45.916149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:38.237 [2024-12-10 21:51:45.924085] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:38.237 [2024-12-10 21:51:45.924804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:38.237 [2024-12-10 21:51:45.933135] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:38.237 
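[editor's note] Each pass of that loop creates one 128 MiB malloc bdev (4 KiB blocks) and exports it through ublk with four queues of depth 512; ublk0 and ublk1 are up at this point, and ublk2/ublk3 follow below. The per-iteration pair, restated as standalone rpc.py calls matching the script's rpc_cmd lines:

    scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096    # 128 MiB backing bdev
    scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512    # expose it as /dev/ublkb$i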
21:51:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.237 21:51:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:38.496 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.496 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:38.496 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:38.496 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.496 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:38.756 [2024-12-10 21:51:46.228201] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:38.756 [2024-12-10 21:51:46.228649] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:38.756 [2024-12-10 21:51:46.228667] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:38.756 [2024-12-10 21:51:46.228679] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:38.756 [2024-12-10 21:51:46.232589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:38.756 [2024-12-10 21:51:46.232618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:38.756 [2024-12-10 21:51:46.243102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:38.756 [2024-12-10 21:51:46.243732] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:38.756 [2024-12-10 21:51:46.246612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:38.756 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.756 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:38.756 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:38.756 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:38.756 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.756 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:39.015 [2024-12-10 21:51:46.520238] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:39.015 [2024-12-10 21:51:46.520709] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:39.015 [2024-12-10 21:51:46.520730] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:39.015 [2024-12-10 21:51:46.520738] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:39.015 
[2024-12-10 21:51:46.528099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:39.015 [2024-12-10 21:51:46.528123] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:39.015 [2024-12-10 21:51:46.536099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:39.015 [2024-12-10 21:51:46.536726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:39.015 [2024-12-10 21:51:46.545115] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:39.015 21:51:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:39.016 { 00:20:39.016 "ublk_device": "/dev/ublkb0", 00:20:39.016 "id": 0, 00:20:39.016 "queue_depth": 512, 00:20:39.016 "num_queues": 4, 00:20:39.016 "bdev_name": "Malloc0" 00:20:39.016 }, 00:20:39.016 { 00:20:39.016 "ublk_device": "/dev/ublkb1", 00:20:39.016 "id": 1, 00:20:39.016 "queue_depth": 512, 00:20:39.016 "num_queues": 4, 00:20:39.016 "bdev_name": "Malloc1" 00:20:39.016 }, 00:20:39.016 { 00:20:39.016 "ublk_device": "/dev/ublkb2", 00:20:39.016 "id": 2, 00:20:39.016 "queue_depth": 512, 00:20:39.016 "num_queues": 4, 00:20:39.016 "bdev_name": "Malloc2" 00:20:39.016 }, 00:20:39.016 { 00:20:39.016 "ublk_device": "/dev/ublkb3", 00:20:39.016 "id": 3, 00:20:39.016 "queue_depth": 512, 00:20:39.016 "num_queues": 4, 00:20:39.016 "bdev_name": "Malloc3" 00:20:39.016 } 00:20:39.016 ]' 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:39.016 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:39.275 21:51:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:39.534 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:39.535 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:39.535 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:39.535 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.794 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:39.794 [2024-12-10 21:51:47.460239] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:39.794 [2024-12-10 21:51:47.507154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:39.794 [2024-12-10 21:51:47.508027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:39.794 [2024-12-10 21:51:47.516124] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:39.794 [2024-12-10 21:51:47.516406] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:39.794 [2024-12-10 21:51:47.516434] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:40.053 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.053 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.053 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:40.053 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.053 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.053 [2024-12-10 21:51:47.531197] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.053 [2024-12-10 21:51:47.576116] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.053 [2024-12-10 21:51:47.576951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.053 [2024-12-10 21:51:47.579241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.053 [2024-12-10 21:51:47.579512] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:40.053 [2024-12-10 21:51:47.579529] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:40.053 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.054 [2024-12-10 21:51:47.590198] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.054 [2024-12-10 21:51:47.620604] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.054 [2024-12-10 21:51:47.621531] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.054 [2024-12-10 21:51:47.628092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.054 [2024-12-10 21:51:47.628350] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:40.054 [2024-12-10 21:51:47.628362] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.054 [2024-12-10 21:51:47.644175] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.054 [2024-12-10 21:51:47.677514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.054 [2024-12-10 21:51:47.678436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.054 [2024-12-10 21:51:47.683170] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.054 [2024-12-10 21:51:47.683433] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:40.054 [2024-12-10 21:51:47.683445] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.054 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:40.313 [2024-12-10 21:51:47.883161] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:40.313 [2024-12-10 21:51:47.891073] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:40.313 [2024-12-10 21:51:47.891117] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:40.313 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:40.313 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.313 21:51:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:40.313 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.313 21:51:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.250 21:51:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.250 21:51:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.250 21:51:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:41.250 21:51:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.250 21:51:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.509 21:51:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.509 21:51:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.509 21:51:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:41.509 21:51:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.509 21:51:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.768 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.768 21:51:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.768 21:51:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:41.768 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.768 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.027 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.027 21:51:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:42.027 21:51:49 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:42.027 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.028 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.028 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.028 21:51:49 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:42.028 21:51:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:42.287 ************************************ 00:20:42.287 END TEST test_create_multi_ublk 00:20:42.287 ************************************ 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:42.287 00:20:42.287 real 0m4.792s 00:20:42.287 user 0m1.033s 00:20:42.287 sys 0m0.228s 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.287 21:51:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.287 21:51:49 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:42.287 21:51:49 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:42.287 21:51:49 ublk -- ublk/ublk.sh@130 -- # killprocess 76727 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@954 -- # '[' -z 76727 ']' 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@958 -- # kill -0 76727 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@959 -- # uname 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76727 00:20:42.287 killing process with pid 76727 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76727' 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@973 -- # kill 76727 00:20:42.287 21:51:49 ublk -- common/autotest_common.sh@978 -- # wait 76727 00:20:43.668 [2024-12-10 21:51:51.127929] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:43.668 [2024-12-10 21:51:51.127989] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:45.046 00:20:45.046 real 0m31.752s 00:20:45.046 user 0m44.980s 00:20:45.046 sys 0m11.082s 00:20:45.046 21:51:52 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.046 21:51:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:45.046 ************************************ 00:20:45.046 END TEST ublk 00:20:45.046 ************************************ 00:20:45.046 21:51:52 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:45.046 
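[editor's note] With the create tests finished and target 76727 shut down, run_test hands off to the recovery suite: ublk_recovery.sh kills the target with SIGKILL while fio drives I/O through /dev/ublkb1, then reattaches the orphaned kernel device to a brand-new target process. (The lcov version probing that follows is autotest coverage plumbing, not part of the test itself.) Running the suite by hand would look roughly like this; root privileges and the repo path are assumptions from context:

    modprobe ublk_drv        # the script also loads this itself below
    /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh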
21:51:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:45.046 21:51:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.046 21:51:52 -- common/autotest_common.sh@10 -- # set +x 00:20:45.046 ************************************ 00:20:45.046 START TEST ublk_recovery 00:20:45.046 ************************************ 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:45.046 * Looking for test storage... 00:20:45.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.046 21:51:52 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:45.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.046 --rc genhtml_branch_coverage=1 00:20:45.046 --rc genhtml_function_coverage=1 00:20:45.046 --rc genhtml_legend=1 00:20:45.046 --rc geninfo_all_blocks=1 00:20:45.046 --rc geninfo_unexecuted_blocks=1 00:20:45.046 00:20:45.046 ' 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:45.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.046 --rc genhtml_branch_coverage=1 00:20:45.046 --rc genhtml_function_coverage=1 00:20:45.046 --rc genhtml_legend=1 00:20:45.046 --rc geninfo_all_blocks=1 00:20:45.046 --rc geninfo_unexecuted_blocks=1 00:20:45.046 00:20:45.046 ' 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:45.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.046 --rc genhtml_branch_coverage=1 00:20:45.046 --rc genhtml_function_coverage=1 00:20:45.046 --rc genhtml_legend=1 00:20:45.046 --rc geninfo_all_blocks=1 00:20:45.046 --rc geninfo_unexecuted_blocks=1 00:20:45.046 00:20:45.046 ' 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:45.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.046 --rc genhtml_branch_coverage=1 00:20:45.046 --rc genhtml_function_coverage=1 00:20:45.046 --rc genhtml_legend=1 00:20:45.046 --rc geninfo_all_blocks=1 00:20:45.046 --rc geninfo_unexecuted_blocks=1 00:20:45.046 00:20:45.046 ' 00:20:45.046 21:51:52 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:45.046 21:51:52 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:45.046 21:51:52 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:45.046 21:51:52 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:45.046 21:51:52 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:45.046 21:51:52 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:45.046 21:51:52 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:45.046 21:51:52 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:45.046 21:51:52 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:45.046 21:51:52 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:45.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.046 21:51:52 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77165 00:20:45.046 21:51:52 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.046 21:51:52 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77165 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 77165 ']' 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.046 21:51:52 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.046 21:51:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.305 [2024-12-10 21:51:52.858526] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:20:45.305 [2024-12-10 21:51:52.858860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77165 ] 00:20:45.565 [2024-12-10 21:51:53.039920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:45.565 [2024-12-10 21:51:53.159544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.565 [2024-12-10 21:51:53.159583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.504 21:51:54 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.504 21:51:54 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:46.504 21:51:54 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:46.504 21:51:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.504 21:51:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.504 [2024-12-10 21:51:54.131075] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:46.504 [2024-12-10 21:51:54.133985] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:46.504 21:51:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.504 21:51:54 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:46.504 21:51:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.504 21:51:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.763 malloc0 00:20:46.763 21:51:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.763 21:51:54 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:46.763 21:51:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.763 21:51:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.763 [2024-12-10 21:51:54.274237] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:20:46.763 [2024-12-10 21:51:54.274374] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:46.763 [2024-12-10 21:51:54.274389] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:46.763 [2024-12-10 21:51:54.274398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:46.763 [2024-12-10 21:51:54.283212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:46.763 [2024-12-10 21:51:54.283236] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:46.763 [2024-12-10 21:51:54.290089] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:46.763 [2024-12-10 21:51:54.290253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:46.763 [2024-12-10 21:51:54.305076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:46.763 1 00:20:46.763 21:51:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.763 21:51:54 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:47.700 21:51:55 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77201 00:20:47.700 21:51:55 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:47.700 21:51:55 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:47.959 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:47.959 fio-3.35 00:20:47.959 Starting 1 process 00:20:53.233 21:52:00 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77165 00:20:53.233 21:52:00 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:58.543 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77165 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:58.543 21:52:05 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77313 00:20:58.543 21:52:05 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.543 21:52:05 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77313 00:20:58.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.543 21:52:05 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 77313 ']' 00:20:58.543 21:52:05 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.543 21:52:05 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.543 21:52:05 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.543 21:52:05 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:58.543 21:52:05 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.543 21:52:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.543 [2024-12-10 21:52:05.448278] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
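The log above walks the recovery scenario end to end: the first spdk_tgt (pid 77165) creates a ublk target, exports a malloc bdev as /dev/ublkb1, fio starts hammering it, the target is killed with SIGKILL mid-I/O, and a fresh spdk_tgt (pid 77313) is brought up to recover the still-live kernel device with the ublk_recover_disk call that follows below. A minimal sketch of that same sequence, built only from the commands visible in this log; the $SPDK variable and the bare sleep standing in for the test's waitforlisten helper are assumptions, not part of the script:

  # Recovery flow exercised by ublk_recovery.sh (RPCs copied from this log).
  # Assumes $SPDK points at an SPDK checkout and ublk_drv is loadable.
  modprobe ublk_drv

  "$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk &
  old_pid=$!
  sleep 2                                            # stand-in for waitforlisten
  "$SPDK/scripts/rpc.py" ublk_create_target
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b malloc0 64 4096
  "$SPDK/scripts/rpc.py" ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1

  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  fio_pid=$!

  sleep 5
  kill -9 "$old_pid"                                 # simulate a target crash mid-I/O
  sleep 5

  "$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk &        # new target, new pid
  sleep 2
  "$SPDK/scripts/rpc.py" ublk_create_target
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b malloc0 64 4096
  "$SPDK/scripts/rpc.py" ublk_recover_disk malloc0 1 # re-attach surviving /dev/ublkb1
  wait "$fio_pid"                                    # fio should finish with err=0

If recovery works, the in-flight fio job survives the target restart, which is exactly what the err=0 summary later in this log confirms.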
00:20:58.543 [2024-12-10 21:52:05.448417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77313 ] 00:20:58.543 [2024-12-10 21:52:05.632679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:58.543 [2024-12-10 21:52:05.752690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.543 [2024-12-10 21:52:05.752728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.110 21:52:06 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.110 21:52:06 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:59.110 21:52:06 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:59.110 21:52:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.110 21:52:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.110 [2024-12-10 21:52:06.759070] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:59.110 [2024-12-10 21:52:06.761836] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:59.110 21:52:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.110 21:52:06 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:59.110 21:52:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.110 21:52:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.368 malloc0 00:20:59.368 21:52:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.368 21:52:06 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:59.368 21:52:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.368 21:52:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.368 [2024-12-10 21:52:06.909242] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:59.368 [2024-12-10 21:52:06.909290] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:59.368 [2024-12-10 21:52:06.909302] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:59.368 [2024-12-10 21:52:06.917115] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:59.368 [2024-12-10 21:52:06.917143] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:20:59.368 1 00:20:59.368 21:52:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.368 21:52:06 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77201 00:21:00.305 [2024-12-10 21:52:07.915557] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:00.305 [2024-12-10 21:52:07.921100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:00.305 [2024-12-10 21:52:07.921119] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:21:01.243 [2024-12-10 21:52:08.923110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:01.243 [2024-12-10 21:52:08.930111] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:01.243 [2024-12-10 21:52:08.930139] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:21:02.619 [2024-12-10 21:52:09.928549] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:02.619 [2024-12-10 21:52:09.936086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:02.619 [2024-12-10 21:52:09.936109] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:21:02.619 [2024-12-10 21:52:09.936122] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:21:02.619 [2024-12-10 21:52:09.936228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:21:24.565 [2024-12-10 21:52:30.744086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:21:24.565 [2024-12-10 21:52:30.748181] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:21:24.565 [2024-12-10 21:52:30.758310] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:21:24.565 [2024-12-10 21:52:30.758342] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:21:51.123 00:21:51.123 fio_test: (groupid=0, jobs=1): err= 0: pid=77204: Tue Dec 10 21:52:55 2024 00:21:51.123 read: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(2975MiB/60002msec) 00:21:51.123 slat (usec): min=2, max=334, avg= 6.95, stdev= 2.25 00:21:51.123 clat (usec): min=1048, max=30442k, avg=4598.90, stdev=256308.40 00:21:51.123 lat (usec): min=1055, max=30442k, avg=4605.85, stdev=256308.40 00:21:51.123 clat percentiles (usec): 00:21:51.123 | 1.00th=[ 1926], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2212], 00:21:51.123 | 30.00th=[ 2245], 40.00th=[ 2278], 50.00th=[ 2278], 60.00th=[ 2311], 00:21:51.123 | 70.00th=[ 2343], 80.00th=[ 2376], 90.00th=[ 2802], 95.00th=[ 3720], 00:21:51.123 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7177], 99.95th=[ 7832], 00:21:51.123 | 99.99th=[12780] 00:21:51.123 bw ( KiB/s): min=33216, max=107824, per=100.00%, avg=101689.85, stdev=12676.05, samples=59 00:21:51.123 iops : min= 8304, max=26956, avg=25422.44, stdev=3169.01, samples=59 00:21:51.123 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(2969MiB/60002msec); 0 zone resets 00:21:51.123 slat (usec): min=2, max=416, avg= 6.97, stdev= 2.40 00:21:51.123 clat (usec): min=1024, max=30443k, avg=5481.72, stdev=300321.16 00:21:51.123 lat (usec): min=1031, max=30443k, avg=5488.69, stdev=300321.15 00:21:51.123 clat percentiles (usec): 00:21:51.123 | 1.00th=[ 1926], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2311], 00:21:51.123 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2409], 00:21:51.123 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2769], 95.00th=[ 3720], 00:21:51.123 | 99.00th=[ 5276], 99.50th=[ 5932], 99.90th=[ 7242], 99.95th=[ 8160], 00:21:51.123 | 99.99th=[13042] 00:21:51.123 bw ( KiB/s): min=34760, max=106800, per=100.00%, avg=101490.22, stdev=12376.30, samples=59 00:21:51.123 iops : min= 8690, max=26700, avg=25372.51, stdev=3094.06, samples=59 00:21:51.123 lat (msec) : 2=2.26%, 4=94.00%, 10=3.72%, 20=0.01%, >=2000=0.01% 00:21:51.123 cpu : usr=6.33%, sys=17.62%, ctx=65544, majf=0, minf=14 00:21:51.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:51.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.123 issued rwts: total=761496,760082,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:51.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.123 00:21:51.123 Run status group 0 (all jobs): 00:21:51.123 READ: bw=49.6MiB/s (52.0MB/s), 49.6MiB/s-49.6MiB/s (52.0MB/s-52.0MB/s), io=2975MiB (3119MB), run=60002-60002msec 00:21:51.124 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=2969MiB (3113MB), run=60002-60002msec 00:21:51.124 00:21:51.124 Disk stats (read/write): 00:21:51.124 ublkb1: ios=758581/757213, merge=0/0, ticks=3437373/4028008, in_queue=7465381, util=99.92% 00:21:51.124 21:52:55 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.124 [2024-12-10 21:52:55.600786] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:51.124 [2024-12-10 21:52:55.633189] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:51.124 [2024-12-10 21:52:55.633409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:51.124 [2024-12-10 21:52:55.640080] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:51.124 [2024-12-10 21:52:55.640264] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:51.124 [2024-12-10 21:52:55.640277] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.124 21:52:55 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.124 [2024-12-10 21:52:55.655215] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:51.124 [2024-12-10 21:52:55.662095] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:51.124 [2024-12-10 21:52:55.662132] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.124 21:52:55 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:51.124 21:52:55 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:51.124 21:52:55 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77313 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 77313 ']' 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 77313 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77313 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.124 killing process with pid 77313 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77313' 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@973 -- # kill 77313 00:21:51.124 21:52:55 ublk_recovery -- common/autotest_common.sh@978 -- # wait 77313 00:21:51.124 [2024-12-10 21:52:57.374096] ublk.c: 
835:_ublk_fini: *DEBUG*: finish shutdown 00:21:51.124 [2024-12-10 21:52:57.374191] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:51.124 ************************************ 00:21:51.124 END TEST ublk_recovery 00:21:51.124 ************************************ 00:21:51.124 00:21:51.124 real 1m6.318s 00:21:51.124 user 1m52.212s 00:21:51.124 sys 0m24.012s 00:21:51.124 21:52:58 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.124 21:52:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.382 21:52:58 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:21:51.382 21:52:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:51.382 21:52:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:51.382 21:52:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.382 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:21:51.382 21:52:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:51.382 21:52:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:51.382 21:52:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:51.382 21:52:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:51.382 21:52:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:51.382 21:52:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:51.382 21:52:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:51.383 21:52:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:51.383 21:52:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:51.383 21:52:58 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:21:51.383 21:52:58 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:51.383 21:52:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:51.383 21:52:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.383 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:21:51.383 ************************************ 00:21:51.383 START TEST ftl 00:21:51.383 ************************************ 00:21:51.383 21:52:58 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:51.383 * Looking for test storage... 
00:21:51.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.383 21:52:59 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:51.383 21:52:59 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:21:51.383 21:52:59 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:51.641 21:52:59 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:51.641 21:52:59 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.641 21:52:59 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.641 21:52:59 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.641 21:52:59 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.641 21:52:59 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.641 21:52:59 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.641 21:52:59 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.641 21:52:59 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.641 21:52:59 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.641 21:52:59 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.641 21:52:59 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.641 21:52:59 ftl -- scripts/common.sh@344 -- # case "$op" in 00:21:51.641 21:52:59 ftl -- scripts/common.sh@345 -- # : 1 00:21:51.641 21:52:59 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.641 21:52:59 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.641 21:52:59 ftl -- scripts/common.sh@365 -- # decimal 1 00:21:51.641 21:52:59 ftl -- scripts/common.sh@353 -- # local d=1 00:21:51.641 21:52:59 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.641 21:52:59 ftl -- scripts/common.sh@355 -- # echo 1 00:21:51.641 21:52:59 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.641 21:52:59 ftl -- scripts/common.sh@366 -- # decimal 2 00:21:51.641 21:52:59 ftl -- scripts/common.sh@353 -- # local d=2 00:21:51.641 21:52:59 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.641 21:52:59 ftl -- scripts/common.sh@355 -- # echo 2 00:21:51.641 21:52:59 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.641 21:52:59 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.641 21:52:59 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.641 21:52:59 ftl -- scripts/common.sh@368 -- # return 0 00:21:51.641 21:52:59 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.641 21:52:59 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:51.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.641 --rc genhtml_branch_coverage=1 00:21:51.641 --rc genhtml_function_coverage=1 00:21:51.641 --rc genhtml_legend=1 00:21:51.641 --rc geninfo_all_blocks=1 00:21:51.641 --rc geninfo_unexecuted_blocks=1 00:21:51.641 00:21:51.641 ' 00:21:51.642 21:52:59 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:51.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.642 --rc genhtml_branch_coverage=1 00:21:51.642 --rc genhtml_function_coverage=1 00:21:51.642 --rc genhtml_legend=1 00:21:51.642 --rc geninfo_all_blocks=1 00:21:51.642 --rc geninfo_unexecuted_blocks=1 00:21:51.642 00:21:51.642 ' 00:21:51.642 21:52:59 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:51.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.642 --rc genhtml_branch_coverage=1 00:21:51.642 --rc genhtml_function_coverage=1 00:21:51.642 --rc 
genhtml_legend=1 00:21:51.642 --rc geninfo_all_blocks=1 00:21:51.642 --rc geninfo_unexecuted_blocks=1 00:21:51.642 00:21:51.642 ' 00:21:51.642 21:52:59 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:51.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.642 --rc genhtml_branch_coverage=1 00:21:51.642 --rc genhtml_function_coverage=1 00:21:51.642 --rc genhtml_legend=1 00:21:51.642 --rc geninfo_all_blocks=1 00:21:51.642 --rc geninfo_unexecuted_blocks=1 00:21:51.642 00:21:51.642 ' 00:21:51.642 21:52:59 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:51.642 21:52:59 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:51.642 21:52:59 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.642 21:52:59 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:51.642 21:52:59 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:51.642 21:52:59 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:51.642 21:52:59 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.642 21:52:59 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:51.642 21:52:59 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:51.642 21:52:59 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.642 21:52:59 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.642 21:52:59 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:51.642 21:52:59 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:51.642 21:52:59 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:51.642 21:52:59 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:51.642 21:52:59 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:51.642 21:52:59 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:51.642 21:52:59 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.642 21:52:59 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:51.642 21:52:59 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:51.642 21:52:59 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:51.642 21:52:59 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:51.642 21:52:59 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:51.642 21:52:59 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:51.642 21:52:59 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:51.642 21:52:59 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:51.642 21:52:59 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:51.642 21:52:59 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.642 21:52:59 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:51.642 21:52:59 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.642 21:52:59 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:51.642 21:52:59 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:21:51.642 21:52:59 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:51.642 21:52:59 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:51.642 21:52:59 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:52.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:52.469 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:52.469 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:52.469 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:52.469 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:52.469 21:53:00 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78120 00:21:52.469 21:53:00 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78120 00:21:52.469 21:53:00 ftl -- common/autotest_common.sh@835 -- # '[' -z 78120 ']' 00:21:52.469 21:53:00 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.469 21:53:00 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.469 21:53:00 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.469 21:53:00 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.469 21:53:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:52.469 21:53:00 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:52.469 [2024-12-10 21:53:00.151728] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:21:52.469 [2024-12-10 21:53:00.151857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78120 ] 00:21:52.728 [2024-12-10 21:53:00.334448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.728 [2024-12-10 21:53:00.451680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.297 21:53:00 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.297 21:53:00 ftl -- common/autotest_common.sh@868 -- # return 0 00:21:53.297 21:53:00 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:53.556 21:53:01 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:54.492 21:53:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:54.492 21:53:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:55.059 21:53:02 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:55.059 21:53:02 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:55.059 21:53:02 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:55.318 21:53:02 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:55.318 21:53:02 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:55.318 21:53:02 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:55.318 21:53:02 ftl -- ftl/ftl.sh@50 -- # break 00:21:55.318 21:53:02 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:55.318 21:53:02 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:21:55.318 21:53:02 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:55.318 21:53:02 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:55.576 21:53:03 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:55.576 21:53:03 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:55.576 21:53:03 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:55.576 21:53:03 ftl -- ftl/ftl.sh@63 -- # break 00:21:55.576 21:53:03 ftl -- ftl/ftl.sh@66 -- # killprocess 78120 00:21:55.576 21:53:03 ftl -- common/autotest_common.sh@954 -- # '[' -z 78120 ']' 00:21:55.576 21:53:03 ftl -- common/autotest_common.sh@958 -- # kill -0 78120 00:21:55.576 21:53:03 ftl -- common/autotest_common.sh@959 -- # uname 00:21:55.576 21:53:03 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.576 21:53:03 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78120 00:21:55.576 21:53:03 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.576 21:53:03 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.576 killing process with pid 78120 00:21:55.577 21:53:03 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78120' 00:21:55.577 21:53:03 ftl -- common/autotest_common.sh@973 -- # kill 78120 00:21:55.577 21:53:03 ftl -- common/autotest_common.sh@978 -- # wait 78120 00:21:58.112 21:53:05 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:58.112 21:53:05 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:58.112 21:53:05 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:58.112 21:53:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.112 21:53:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:58.112 ************************************ 00:21:58.112 START TEST ftl_fio_basic 00:21:58.112 ************************************ 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:58.112 * Looking for test storage... 
00:21:58.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.112 --rc genhtml_branch_coverage=1 00:21:58.112 --rc genhtml_function_coverage=1 00:21:58.112 --rc genhtml_legend=1 00:21:58.112 --rc geninfo_all_blocks=1 00:21:58.112 --rc geninfo_unexecuted_blocks=1 00:21:58.112 00:21:58.112 ' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.112 --rc 
genhtml_branch_coverage=1 00:21:58.112 --rc genhtml_function_coverage=1 00:21:58.112 --rc genhtml_legend=1 00:21:58.112 --rc geninfo_all_blocks=1 00:21:58.112 --rc geninfo_unexecuted_blocks=1 00:21:58.112 00:21:58.112 ' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.112 --rc genhtml_branch_coverage=1 00:21:58.112 --rc genhtml_function_coverage=1 00:21:58.112 --rc genhtml_legend=1 00:21:58.112 --rc geninfo_all_blocks=1 00:21:58.112 --rc geninfo_unexecuted_blocks=1 00:21:58.112 00:21:58.112 ' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:58.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.112 --rc genhtml_branch_coverage=1 00:21:58.112 --rc genhtml_function_coverage=1 00:21:58.112 --rc genhtml_legend=1 00:21:58.112 --rc geninfo_all_blocks=1 00:21:58.112 --rc geninfo_unexecuted_blocks=1 00:21:58.112 00:21:58.112 ' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.112 
21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:58.112 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78263 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78263 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 78263 ']' 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
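Before this target came up, ftl.sh chose its devices by filtering bdev_get_bdevs output with jq (the cache_disks/base_disks steps logged earlier): the NV-cache device must be non-zoned, expose 64-byte metadata, and have at least 1310720 blocks, while the base device is any other non-zoned bdev of sufficient size. A standalone sketch of that selection, assuming a spdk_tgt is already serving RPCs; the jq filters are copied from the log, with the hard-coded cache address replaced by the variable the script interpolates:

  # Pick an FTL NV-cache device and a base device, as ftl.sh does above.
  rpc="$SPDK/scripts/rpc.py"

  # Cache candidates: non-zoned NVMe bdevs with 64B metadata and >= 1310720 blocks.
  nv_cache=$($rpc bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)

  # Base candidates: any other non-zoned bdev of sufficient size.
  base=$($rpc bdev_get_bdevs | jq -r ".[] | select(.driver_specific.nvme[0].pci_address!=\"$nv_cache\" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address" | head -n1)

  echo "nv_cache=$nv_cache base=$base"

In this run the filters resolved to nv_cache=0000:00:10.0 and base=0000:00:11.0, which is why fio.sh is invoked with those two addresses.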
00:21:58.113 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.113 21:53:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:58.113 [2024-12-10 21:53:05.780390] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:21:58.113 [2024-12-10 21:53:05.780516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78263 ] 00:21:58.372 [2024-12-10 21:53:05.964113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:58.372 [2024-12-10 21:53:06.086727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.372 [2024-12-10 21:53:06.086860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.372 [2024-12-10 21:53:06.086898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.333 21:53:06 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.333 21:53:06 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:59.333 21:53:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:59.333 21:53:06 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:59.333 21:53:06 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:59.333 21:53:06 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:59.333 21:53:06 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:59.333 21:53:06 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:59.592 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:59.592 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:59.592 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:59.592 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:59.592 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:59.592 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:59.592 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:59.592 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:59.851 { 00:21:59.851 "name": "nvme0n1", 00:21:59.851 "aliases": [ 00:21:59.851 "2647cbd2-49f5-4e70-bec1-c0b0942daefa" 00:21:59.851 ], 00:21:59.851 "product_name": "NVMe disk", 00:21:59.851 "block_size": 4096, 00:21:59.851 "num_blocks": 1310720, 00:21:59.851 "uuid": "2647cbd2-49f5-4e70-bec1-c0b0942daefa", 00:21:59.851 "numa_id": -1, 00:21:59.851 "assigned_rate_limits": { 00:21:59.851 "rw_ios_per_sec": 0, 00:21:59.851 "rw_mbytes_per_sec": 0, 00:21:59.851 "r_mbytes_per_sec": 0, 00:21:59.851 "w_mbytes_per_sec": 0 00:21:59.851 }, 00:21:59.851 "claimed": false, 00:21:59.851 "zoned": false, 00:21:59.851 "supported_io_types": { 00:21:59.851 "read": true, 00:21:59.851 "write": true, 00:21:59.851 "unmap": true, 00:21:59.851 "flush": true, 00:21:59.851 "reset": true, 00:21:59.851 "nvme_admin": true, 00:21:59.851 "nvme_io": true, 00:21:59.851 "nvme_io_md": 
false, 00:21:59.851 "write_zeroes": true, 00:21:59.851 "zcopy": false, 00:21:59.851 "get_zone_info": false, 00:21:59.851 "zone_management": false, 00:21:59.851 "zone_append": false, 00:21:59.851 "compare": true, 00:21:59.851 "compare_and_write": false, 00:21:59.851 "abort": true, 00:21:59.851 "seek_hole": false, 00:21:59.851 "seek_data": false, 00:21:59.851 "copy": true, 00:21:59.851 "nvme_iov_md": false 00:21:59.851 }, 00:21:59.851 "driver_specific": { 00:21:59.851 "nvme": [ 00:21:59.851 { 00:21:59.851 "pci_address": "0000:00:11.0", 00:21:59.851 "trid": { 00:21:59.851 "trtype": "PCIe", 00:21:59.851 "traddr": "0000:00:11.0" 00:21:59.851 }, 00:21:59.851 "ctrlr_data": { 00:21:59.851 "cntlid": 0, 00:21:59.851 "vendor_id": "0x1b36", 00:21:59.851 "model_number": "QEMU NVMe Ctrl", 00:21:59.851 "serial_number": "12341", 00:21:59.851 "firmware_revision": "8.0.0", 00:21:59.851 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:59.851 "oacs": { 00:21:59.851 "security": 0, 00:21:59.851 "format": 1, 00:21:59.851 "firmware": 0, 00:21:59.851 "ns_manage": 1 00:21:59.851 }, 00:21:59.851 "multi_ctrlr": false, 00:21:59.851 "ana_reporting": false 00:21:59.851 }, 00:21:59.851 "vs": { 00:21:59.851 "nvme_version": "1.4" 00:21:59.851 }, 00:21:59.851 "ns_data": { 00:21:59.851 "id": 1, 00:21:59.851 "can_share": false 00:21:59.851 } 00:21:59.851 } 00:21:59.851 ], 00:21:59.851 "mp_policy": "active_passive" 00:21:59.851 } 00:21:59.851 } 00:21:59.851 ]' 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:59.851 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:59.852 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:59.852 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:00.110 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:22:00.110 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:00.369 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e0f8a8a3-bfca-48e9-9582-426980e6fc31 00:22:00.369 21:53:07 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e0f8a8a3-bfca-48e9-9582-426980e6fc31 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:00.629 21:53:08 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:00.629 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:00.888 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:00.888 { 00:22:00.888 "name": "b959b779-0236-46c4-bac5-08a7fc5a6495", 00:22:00.888 "aliases": [ 00:22:00.888 "lvs/nvme0n1p0" 00:22:00.888 ], 00:22:00.888 "product_name": "Logical Volume", 00:22:00.888 "block_size": 4096, 00:22:00.888 "num_blocks": 26476544, 00:22:00.888 "uuid": "b959b779-0236-46c4-bac5-08a7fc5a6495", 00:22:00.888 "assigned_rate_limits": { 00:22:00.888 "rw_ios_per_sec": 0, 00:22:00.888 "rw_mbytes_per_sec": 0, 00:22:00.888 "r_mbytes_per_sec": 0, 00:22:00.888 "w_mbytes_per_sec": 0 00:22:00.888 }, 00:22:00.888 "claimed": false, 00:22:00.888 "zoned": false, 00:22:00.888 "supported_io_types": { 00:22:00.888 "read": true, 00:22:00.888 "write": true, 00:22:00.888 "unmap": true, 00:22:00.888 "flush": false, 00:22:00.888 "reset": true, 00:22:00.888 "nvme_admin": false, 00:22:00.888 "nvme_io": false, 00:22:00.888 "nvme_io_md": false, 00:22:00.888 "write_zeroes": true, 00:22:00.888 "zcopy": false, 00:22:00.888 "get_zone_info": false, 00:22:00.888 "zone_management": false, 00:22:00.888 "zone_append": false, 00:22:00.888 "compare": false, 00:22:00.888 "compare_and_write": false, 00:22:00.888 "abort": false, 00:22:00.888 "seek_hole": true, 00:22:00.888 "seek_data": true, 00:22:00.888 "copy": false, 00:22:00.888 "nvme_iov_md": false 00:22:00.888 }, 00:22:00.888 "driver_specific": { 00:22:00.888 "lvol": { 00:22:00.889 "lvol_store_uuid": "e0f8a8a3-bfca-48e9-9582-426980e6fc31", 00:22:00.889 "base_bdev": "nvme0n1", 00:22:00.889 "thin_provision": true, 00:22:00.889 "num_allocated_clusters": 0, 00:22:00.889 "snapshot": false, 00:22:00.889 "clone": false, 00:22:00.889 "esnap_clone": false 00:22:00.889 } 00:22:00.889 } 00:22:00.889 } 00:22:00.889 ]' 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:22:00.889 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:01.149 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:01.149 21:53:08 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:22:01.149 21:53:08 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:01.149 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:01.149 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:01.149 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:01.149 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:01.149 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:01.408 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:01.408 { 00:22:01.408 "name": "b959b779-0236-46c4-bac5-08a7fc5a6495", 00:22:01.408 "aliases": [ 00:22:01.408 "lvs/nvme0n1p0" 00:22:01.408 ], 00:22:01.408 "product_name": "Logical Volume", 00:22:01.408 "block_size": 4096, 00:22:01.408 "num_blocks": 26476544, 00:22:01.408 "uuid": "b959b779-0236-46c4-bac5-08a7fc5a6495", 00:22:01.408 "assigned_rate_limits": { 00:22:01.408 "rw_ios_per_sec": 0, 00:22:01.408 "rw_mbytes_per_sec": 0, 00:22:01.408 "r_mbytes_per_sec": 0, 00:22:01.408 "w_mbytes_per_sec": 0 00:22:01.408 }, 00:22:01.408 "claimed": false, 00:22:01.408 "zoned": false, 00:22:01.408 "supported_io_types": { 00:22:01.408 "read": true, 00:22:01.408 "write": true, 00:22:01.408 "unmap": true, 00:22:01.408 "flush": false, 00:22:01.408 "reset": true, 00:22:01.408 "nvme_admin": false, 00:22:01.408 "nvme_io": false, 00:22:01.408 "nvme_io_md": false, 00:22:01.408 "write_zeroes": true, 00:22:01.408 "zcopy": false, 00:22:01.408 "get_zone_info": false, 00:22:01.408 "zone_management": false, 00:22:01.408 "zone_append": false, 00:22:01.408 "compare": false, 00:22:01.408 "compare_and_write": false, 00:22:01.408 "abort": false, 00:22:01.408 "seek_hole": true, 00:22:01.408 "seek_data": true, 00:22:01.408 "copy": false, 00:22:01.408 "nvme_iov_md": false 00:22:01.408 }, 00:22:01.408 "driver_specific": { 00:22:01.408 "lvol": { 00:22:01.408 "lvol_store_uuid": "e0f8a8a3-bfca-48e9-9582-426980e6fc31", 00:22:01.408 "base_bdev": "nvme0n1", 00:22:01.408 "thin_provision": true, 00:22:01.408 "num_allocated_clusters": 0, 00:22:01.408 "snapshot": false, 00:22:01.408 "clone": false, 00:22:01.408 "esnap_clone": false 00:22:01.408 } 00:22:01.408 } 00:22:01.408 } 00:22:01.408 ]' 00:22:01.408 21:53:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:01.408 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:01.408 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:01.408 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:01.408 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:01.408 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:01.408 21:53:09 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:22:01.408 21:53:09 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:22:01.667 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:01.667 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b959b779-0236-46c4-bac5-08a7fc5a6495 00:22:01.926 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:01.926 { 00:22:01.926 "name": "b959b779-0236-46c4-bac5-08a7fc5a6495", 00:22:01.926 "aliases": [ 00:22:01.926 "lvs/nvme0n1p0" 00:22:01.926 ], 00:22:01.926 "product_name": "Logical Volume", 00:22:01.926 "block_size": 4096, 00:22:01.926 "num_blocks": 26476544, 00:22:01.926 "uuid": "b959b779-0236-46c4-bac5-08a7fc5a6495", 00:22:01.926 "assigned_rate_limits": { 00:22:01.926 "rw_ios_per_sec": 0, 00:22:01.926 "rw_mbytes_per_sec": 0, 00:22:01.926 "r_mbytes_per_sec": 0, 00:22:01.926 "w_mbytes_per_sec": 0 00:22:01.926 }, 00:22:01.926 "claimed": false, 00:22:01.926 "zoned": false, 00:22:01.926 "supported_io_types": { 00:22:01.926 "read": true, 00:22:01.926 "write": true, 00:22:01.926 "unmap": true, 00:22:01.926 "flush": false, 00:22:01.926 "reset": true, 00:22:01.926 "nvme_admin": false, 00:22:01.926 "nvme_io": false, 00:22:01.926 "nvme_io_md": false, 00:22:01.927 "write_zeroes": true, 00:22:01.927 "zcopy": false, 00:22:01.927 "get_zone_info": false, 00:22:01.927 "zone_management": false, 00:22:01.927 "zone_append": false, 00:22:01.927 "compare": false, 00:22:01.927 "compare_and_write": false, 00:22:01.927 "abort": false, 00:22:01.927 "seek_hole": true, 00:22:01.927 "seek_data": true, 00:22:01.927 "copy": false, 00:22:01.927 "nvme_iov_md": false 00:22:01.927 }, 00:22:01.927 "driver_specific": { 00:22:01.927 "lvol": { 00:22:01.927 "lvol_store_uuid": "e0f8a8a3-bfca-48e9-9582-426980e6fc31", 00:22:01.927 "base_bdev": "nvme0n1", 00:22:01.927 "thin_provision": true, 00:22:01.927 "num_allocated_clusters": 0, 00:22:01.927 "snapshot": false, 00:22:01.927 "clone": false, 00:22:01.927 "esnap_clone": false 00:22:01.927 } 00:22:01.927 } 00:22:01.927 } 00:22:01.927 ]' 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:22:01.927 21:53:09 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b959b779-0236-46c4-bac5-08a7fc5a6495 -c nvc0n1p0 --l2p_dram_limit 60 00:22:02.188 [2024-12-10 21:53:09.692712] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.692764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.188 [2024-12-10 21:53:09.692785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.188 [2024-12-10 21:53:09.692795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.692890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.692908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.188 [2024-12-10 21:53:09.692924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:02.188 [2024-12-10 21:53:09.692934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.692998] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.188 [2024-12-10 21:53:09.694195] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.188 [2024-12-10 21:53:09.694439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.694458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.188 [2024-12-10 21:53:09.694473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:22:02.188 [2024-12-10 21:53:09.694484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.694698] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 25f866dd-62b6-4f58-9270-4376170a1d7d 00:22:02.188 [2024-12-10 21:53:09.697132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.697175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:02.188 [2024-12-10 21:53:09.697187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:02.188 [2024-12-10 21:53:09.697200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.710594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.710627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.188 [2024-12-10 21:53:09.710642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.322 ms 00:22:02.188 [2024-12-10 21:53:09.710656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.710794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.710814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.188 [2024-12-10 21:53:09.710826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:02.188 [2024-12-10 21:53:09.710844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.710921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.710939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.188 [2024-12-10 21:53:09.710951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:02.188 [2024-12-10 21:53:09.710965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:02.188 [2024-12-10 21:53:09.711008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.188 [2024-12-10 21:53:09.717093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.717124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.188 [2024-12-10 21:53:09.717141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms 00:22:02.188 [2024-12-10 21:53:09.717155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.717205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.717216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.188 [2024-12-10 21:53:09.717230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:02.188 [2024-12-10 21:53:09.717240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.717308] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:02.188 [2024-12-10 21:53:09.717505] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.188 [2024-12-10 21:53:09.717530] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.188 [2024-12-10 21:53:09.717544] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.188 [2024-12-10 21:53:09.717562] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.188 [2024-12-10 21:53:09.717574] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.188 [2024-12-10 21:53:09.717588] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.188 [2024-12-10 21:53:09.717598] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.188 [2024-12-10 21:53:09.717612] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.188 [2024-12-10 21:53:09.717622] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.188 [2024-12-10 21:53:09.717635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.717648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.188 [2024-12-10 21:53:09.717661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:22:02.188 [2024-12-10 21:53:09.717671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.717759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.188 [2024-12-10 21:53:09.717771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.188 [2024-12-10 21:53:09.717784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:02.188 [2024-12-10 21:53:09.717794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.188 [2024-12-10 21:53:09.717917] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.188 [2024-12-10 21:53:09.717930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.188 
[2024-12-10 21:53:09.717950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.188 [2024-12-10 21:53:09.717960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.188 [2024-12-10 21:53:09.717975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.188 [2024-12-10 21:53:09.717985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.188 [2024-12-10 21:53:09.718013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.188 [2024-12-10 21:53:09.718028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.188 [2024-12-10 21:53:09.718067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.188 [2024-12-10 21:53:09.718079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.188 [2024-12-10 21:53:09.718093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.188 [2024-12-10 21:53:09.718102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.188 [2024-12-10 21:53:09.718115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.188 [2024-12-10 21:53:09.718123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.188 [2024-12-10 21:53:09.718147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.188 [2024-12-10 21:53:09.718158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.188 [2024-12-10 21:53:09.718180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.188 [2024-12-10 21:53:09.718202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.188 [2024-12-10 21:53:09.718210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.188 [2024-12-10 21:53:09.718231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.188 [2024-12-10 21:53:09.718243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.188 [2024-12-10 21:53:09.718279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.188 [2024-12-10 21:53:09.718288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.188 [2024-12-10 21:53:09.718308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.188 [2024-12-10 21:53:09.718326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.188 [2024-12-10 21:53:09.718358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:22:02.188 [2024-12-10 21:53:09.718371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.188 [2024-12-10 21:53:09.718396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.188 [2024-12-10 21:53:09.718408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.188 [2024-12-10 21:53:09.718417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.188 [2024-12-10 21:53:09.718430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.188 [2024-12-10 21:53:09.718441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.189 [2024-12-10 21:53:09.718454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.189 [2024-12-10 21:53:09.718464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.189 [2024-12-10 21:53:09.718476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.189 [2024-12-10 21:53:09.718484] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.189 [2024-12-10 21:53:09.718498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.189 [2024-12-10 21:53:09.718509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.189 [2024-12-10 21:53:09.718521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.189 [2024-12-10 21:53:09.718531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.189 [2024-12-10 21:53:09.718547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.189 [2024-12-10 21:53:09.718556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.189 [2024-12-10 21:53:09.718569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.189 [2024-12-10 21:53:09.718577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.189 [2024-12-10 21:53:09.718590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.189 [2024-12-10 21:53:09.718604] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.189 [2024-12-10 21:53:09.718625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.189 [2024-12-10 21:53:09.718638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.189 [2024-12-10 21:53:09.718654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.189 [2024-12-10 21:53:09.718665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.189 [2024-12-10 21:53:09.718683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.189 [2024-12-10 21:53:09.718694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.189 [2024-12-10 21:53:09.718710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.189 [2024-12-10 
21:53:09.718720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.189 [2024-12-10 21:53:09.718736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.189 [2024-12-10 21:53:09.718747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.189 [2024-12-10 21:53:09.718768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.189 [2024-12-10 21:53:09.718779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.189 [2024-12-10 21:53:09.718796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.189 [2024-12-10 21:53:09.718807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.189 [2024-12-10 21:53:09.718822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.189 [2024-12-10 21:53:09.718832] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.189 [2024-12-10 21:53:09.718847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.189 [2024-12-10 21:53:09.718863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.189 [2024-12-10 21:53:09.718877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.189 [2024-12-10 21:53:09.718889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.189 [2024-12-10 21:53:09.718903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.189 [2024-12-10 21:53:09.718917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.189 [2024-12-10 21:53:09.718932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.189 [2024-12-10 21:53:09.718943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:22:02.189 [2024-12-10 21:53:09.718956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.189 [2024-12-10 21:53:09.719042] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
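The startup trace above is the work done by the bdev_ftl_create RPC issued earlier: FTL checks its configuration, opens the base and cache bdevs, writes a fresh superblock, lays out and dumps the metadata regions, then scrubs the NV cache data region. As a rough sketch, the same bdev stack can be rebuilt with only the RPCs already logged in this run, assuming a running SPDK target with the same nvme0n1/nvc0n1 devices; the lvol UUID is run-specific, and the ~5% cache-sizing rule is inferred from the numbers above rather than shown in the log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BASE=b959b779-0236-46c4-bac5-08a7fc5a6495   # thin-provisioned lvol, alias lvs/nvme0n1p0

  # get_bdev_size: MiB = num_blocks * block_size / 1 MiB
  bs=$("$RPC" bdev_get_bdevs -b "$BASE" | jq '.[] .block_size')   # 4096
  nb=$("$RPC" bdev_get_bdevs -b "$BASE" | jq '.[] .num_blocks')   # 26476544
  echo $((nb * bs / 1048576))                                     # 103424 MiB

  # NV cache: 5171 MiB (~5% of the 103424 MiB base) split off nvc0n1 as nvc0n1p0
  "$RPC" bdev_split_create nvc0n1 -s 5171 1

  # Bind base + cache into one FTL bdev, capping the L2P table at 60 MiB of DRAM
  "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$BASE" -c nvc0n1p0 --l2p_dram_limit 60

The scrub is the long pole of this startup: about 3.8 s of the 4.3 s total, per the duration records below.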
00:22:02.189 [2024-12-10 21:53:09.719079] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:06.382 [2024-12-10 21:53:13.476832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.476887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:06.382 [2024-12-10 21:53:13.476902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3763.890 ms 00:22:06.382 [2024-12-10 21:53:13.476916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.513894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.513948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.382 [2024-12-10 21:53:13.513963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.686 ms 00:22:06.382 [2024-12-10 21:53:13.513979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.514160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.514181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.382 [2024-12-10 21:53:13.514193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:06.382 [2024-12-10 21:53:13.514214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.593843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.593890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.382 [2024-12-10 21:53:13.593909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.676 ms 00:22:06.382 [2024-12-10 21:53:13.593924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.593988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.594003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.382 [2024-12-10 21:53:13.594015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.382 [2024-12-10 21:53:13.594027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.594936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.594964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.382 [2024-12-10 21:53:13.594976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:22:06.382 [2024-12-10 21:53:13.594995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.595150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.595169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.382 [2024-12-10 21:53:13.595182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:22:06.382 [2024-12-10 21:53:13.595199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.618187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.618228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.382 [2024-12-10 
21:53:13.618242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.977 ms 00:22:06.382 [2024-12-10 21:53:13.618257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.630611] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:06.382 [2024-12-10 21:53:13.648171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.648211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.382 [2024-12-10 21:53:13.648240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.788 ms 00:22:06.382 [2024-12-10 21:53:13.648254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.746866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.746904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:06.382 [2024-12-10 21:53:13.746932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.707 ms 00:22:06.382 [2024-12-10 21:53:13.746943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.747175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.747190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.382 [2024-12-10 21:53:13.747207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:22:06.382 [2024-12-10 21:53:13.747218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.782200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.782241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:06.382 [2024-12-10 21:53:13.782258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.960 ms 00:22:06.382 [2024-12-10 21:53:13.782269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.815628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.815665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:06.382 [2024-12-10 21:53:13.815682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.351 ms 00:22:06.382 [2024-12-10 21:53:13.815692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.816442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.816469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.382 [2024-12-10 21:53:13.816486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:22:06.382 [2024-12-10 21:53:13.816496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.919376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.919413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:06.382 [2024-12-10 21:53:13.919435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.959 ms 00:22:06.382 [2024-12-10 21:53:13.919449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 
21:53:13.955774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.955811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:06.382 [2024-12-10 21:53:13.955838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.265 ms 00:22:06.382 [2024-12-10 21:53:13.955849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:13.989905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:13.989942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:06.382 [2024-12-10 21:53:13.989959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.049 ms 00:22:06.382 [2024-12-10 21:53:13.989969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:14.025730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:14.025765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.382 [2024-12-10 21:53:14.025794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.760 ms 00:22:06.382 [2024-12-10 21:53:14.025804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:14.025864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:14.025876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.382 [2024-12-10 21:53:14.025898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:06.382 [2024-12-10 21:53:14.025909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:14.026077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.382 [2024-12-10 21:53:14.026096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.382 [2024-12-10 21:53:14.026110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:06.382 [2024-12-10 21:53:14.026120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.382 [2024-12-10 21:53:14.027610] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4341.402 ms, result 0 00:22:06.382 { 00:22:06.382 "name": "ftl0", 00:22:06.382 "uuid": "25f866dd-62b6-4f58-9270-4376170a1d7d" 00:22:06.382 } 00:22:06.382 21:53:14 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:22:06.382 21:53:14 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:06.382 21:53:14 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:06.382 21:53:14 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:22:06.382 21:53:14 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:06.382 21:53:14 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:06.382 21:53:14 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:06.642 21:53:14 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:06.901 [ 00:22:06.901 { 00:22:06.901 "name": "ftl0", 00:22:06.901 "aliases": [ 00:22:06.901 "25f866dd-62b6-4f58-9270-4376170a1d7d" 00:22:06.901 ], 00:22:06.901 "product_name": "FTL 
disk", 00:22:06.901 "block_size": 4096, 00:22:06.901 "num_blocks": 20971520, 00:22:06.901 "uuid": "25f866dd-62b6-4f58-9270-4376170a1d7d", 00:22:06.901 "assigned_rate_limits": { 00:22:06.901 "rw_ios_per_sec": 0, 00:22:06.901 "rw_mbytes_per_sec": 0, 00:22:06.901 "r_mbytes_per_sec": 0, 00:22:06.901 "w_mbytes_per_sec": 0 00:22:06.901 }, 00:22:06.901 "claimed": false, 00:22:06.901 "zoned": false, 00:22:06.901 "supported_io_types": { 00:22:06.901 "read": true, 00:22:06.901 "write": true, 00:22:06.901 "unmap": true, 00:22:06.901 "flush": true, 00:22:06.901 "reset": false, 00:22:06.901 "nvme_admin": false, 00:22:06.901 "nvme_io": false, 00:22:06.901 "nvme_io_md": false, 00:22:06.901 "write_zeroes": true, 00:22:06.901 "zcopy": false, 00:22:06.901 "get_zone_info": false, 00:22:06.901 "zone_management": false, 00:22:06.901 "zone_append": false, 00:22:06.901 "compare": false, 00:22:06.901 "compare_and_write": false, 00:22:06.901 "abort": false, 00:22:06.901 "seek_hole": false, 00:22:06.901 "seek_data": false, 00:22:06.901 "copy": false, 00:22:06.901 "nvme_iov_md": false 00:22:06.901 }, 00:22:06.901 "driver_specific": { 00:22:06.901 "ftl": { 00:22:06.901 "base_bdev": "b959b779-0236-46c4-bac5-08a7fc5a6495", 00:22:06.901 "cache": "nvc0n1p0" 00:22:06.901 } 00:22:06.901 } 00:22:06.901 } 00:22:06.901 ] 00:22:06.901 21:53:14 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:22:06.901 21:53:14 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:22:06.901 21:53:14 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:07.160 21:53:14 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:22:07.160 21:53:14 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:07.160 [2024-12-10 21:53:14.858650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.160 [2024-12-10 21:53:14.858704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:07.160 [2024-12-10 21:53:14.858722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:07.160 [2024-12-10 21:53:14.858737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.160 [2024-12-10 21:53:14.858791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:07.160 [2024-12-10 21:53:14.863253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.160 [2024-12-10 21:53:14.863289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:07.160 [2024-12-10 21:53:14.863306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.443 ms 00:22:07.160 [2024-12-10 21:53:14.863316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.160 [2024-12-10 21:53:14.863883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.160 [2024-12-10 21:53:14.863904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:07.160 [2024-12-10 21:53:14.863919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:22:07.160 [2024-12-10 21:53:14.863930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.160 [2024-12-10 21:53:14.866271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.160 [2024-12-10 21:53:14.866300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:07.160 
[2024-12-10 21:53:14.866315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.310 ms 00:22:07.160 [2024-12-10 21:53:14.866325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.160 [2024-12-10 21:53:14.871174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.160 [2024-12-10 21:53:14.871210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:07.160 [2024-12-10 21:53:14.871225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:22:07.160 [2024-12-10 21:53:14.871235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:14.907013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:14.907070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:07.420 [2024-12-10 21:53:14.907106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.716 ms 00:22:07.420 [2024-12-10 21:53:14.907118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:14.929325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:14.929362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:07.420 [2024-12-10 21:53:14.929390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.177 ms 00:22:07.420 [2024-12-10 21:53:14.929401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:14.929654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:14.929670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:07.420 [2024-12-10 21:53:14.929695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:22:07.420 [2024-12-10 21:53:14.929705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:14.964404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:14.964438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:07.420 [2024-12-10 21:53:14.964453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.718 ms 00:22:07.420 [2024-12-10 21:53:14.964463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:14.999033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:14.999077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:07.420 [2024-12-10 21:53:14.999094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.566 ms 00:22:07.420 [2024-12-10 21:53:14.999105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:15.033946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:15.033982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:07.420 [2024-12-10 21:53:15.033999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.834 ms 00:22:07.420 [2024-12-10 21:53:15.034008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:15.068088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:15.068121] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:07.420 [2024-12-10 21:53:15.068136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.937 ms 00:22:07.420 [2024-12-10 21:53:15.068146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:15.068223] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:07.420 [2024-12-10 21:53:15.068241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 
[2024-12-10 21:53:15.068518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:22:07.420 [2024-12-10 21:53:15.068826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.068993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:07.420 [2024-12-10 21:53:15.069505] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:07.420 [2024-12-10 21:53:15.069518] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 25f866dd-62b6-4f58-9270-4376170a1d7d 00:22:07.420 [2024-12-10 21:53:15.069529] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:07.420 [2024-12-10 21:53:15.069544] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:07.420 [2024-12-10 21:53:15.069554] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:07.420 [2024-12-10 21:53:15.069570] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:07.420 [2024-12-10 21:53:15.069579] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:07.420 [2024-12-10 21:53:15.069592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:07.420 [2024-12-10 21:53:15.069601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:07.420 [2024-12-10 21:53:15.069613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:07.420 [2024-12-10 21:53:15.069621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:07.420 [2024-12-10 21:53:15.069633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:15.069645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:07.420 [2024-12-10 21:53:15.069658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:22:07.420 [2024-12-10 21:53:15.069668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:15.088568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:15.088603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:07.420 [2024-12-10 21:53:15.088617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.853 ms 00:22:07.420 [2024-12-10 21:53:15.088627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.420 [2024-12-10 21:53:15.089186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.420 [2024-12-10 21:53:15.089200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:07.420 [2024-12-10 21:53:15.089214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:22:07.420 [2024-12-10 21:53:15.089223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.156019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.156065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.680 [2024-12-10 21:53:15.156082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.156092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
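Every FTL management step in these traces is reported as the same four trace_step records: an Action (or, during unload, Rollback) marker, a name, a duration, and a status. That regularity makes the console output easy to mine when a run slows down. A throwaway sketch, assuming the raw one-record-per-line console output rather than the wrapped form reproduced here, with console.log standing in for a saved copy of this log:

  # Pair each "name:" record with the "duration:" record that follows it,
  # then list the slowest FTL management steps first.
  awk '
    /trace_step:.*name: /     { sub(/.*name: /, "");     name = $0 }
    /trace_step:.*duration: / { sub(/.*duration: /, ""); printf "%12s  %s\n", $0, name }
  ' console.log | sort -rn | head

On this run it would rank "Scrub NV cache" (3763.890 ms) first, with the metadata-persist steps of this shutdown (roughly 22-36 ms each) far behind; the Rollback records above and below all report 0.000 ms.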
00:22:07.680 [2024-12-10 21:53:15.156173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.156184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.680 [2024-12-10 21:53:15.156198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.156208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.156331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.156349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.680 [2024-12-10 21:53:15.156362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.156372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.156414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.156426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.680 [2024-12-10 21:53:15.156439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.156449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.285157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.285212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.680 [2024-12-10 21:53:15.285230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.285241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.382365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.382417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:07.680 [2024-12-10 21:53:15.382441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.382452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.382595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.382610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:07.680 [2024-12-10 21:53:15.382628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.382638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.382754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.382767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:07.680 [2024-12-10 21:53:15.382781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.382791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.382925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.382940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:07.680 [2024-12-10 21:53:15.382953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 
21:53:15.382969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.383036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.383064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:07.680 [2024-12-10 21:53:15.383079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.383089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.383162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.383174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:07.680 [2024-12-10 21:53:15.383187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.383197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.383275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:07.680 [2024-12-10 21:53:15.383289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:07.680 [2024-12-10 21:53:15.383303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:07.680 [2024-12-10 21:53:15.383316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.680 [2024-12-10 21:53:15.383542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.700 ms, result 0 00:22:07.680 true 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78263 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 78263 ']' 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 78263 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78263 00:22:07.939 killing process with pid 78263 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78263' 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 78263 00:22:07.939 21:53:15 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 78263 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:13.222 21:53:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:13.222 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:22:13.222 fio-3.35 00:22:13.222 Starting 1 thread 00:22:19.785 00:22:19.785 test: (groupid=0, jobs=1): err= 0: pid=78481: Tue Dec 10 21:53:26 2024 00:22:19.785 read: IOPS=806, BW=53.5MiB/s (56.1MB/s)(255MiB/4755msec) 00:22:19.785 slat (nsec): min=8071, max=39933, avg=11265.01, stdev=3122.15 00:22:19.786 clat (usec): min=384, max=1191, avg=553.73, stdev=54.68 00:22:19.786 lat (usec): min=395, max=1210, avg=565.00, stdev=55.47 00:22:19.786 clat percentiles (usec): 00:22:19.786 | 1.00th=[ 457], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 498], 00:22:19.786 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 570], 00:22:19.786 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[ 627], 00:22:19.786 | 99.00th=[ 717], 99.50th=[ 775], 99.90th=[ 914], 99.95th=[ 1074], 00:22:19.786 | 99.99th=[ 1188] 00:22:19.786 write: IOPS=811, BW=53.9MiB/s (56.5MB/s)(256MiB/4750msec); 0 zone resets 00:22:19.786 slat (usec): min=17, max=116, avg=28.61, stdev= 5.39 00:22:19.786 clat (usec): min=467, max=1121, avg=630.87, stdev=69.81 00:22:19.786 lat (usec): min=505, max=1143, avg=659.48, stdev=70.20 00:22:19.786 clat percentiles (usec): 00:22:19.786 | 1.00th=[ 498], 5.00th=[ 562], 10.00th=[ 578], 20.00th=[ 586], 00:22:19.786 | 30.00th=[ 594], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[ 652], 00:22:19.786 | 70.00th=[ 660], 80.00th=[ 668], 90.00th=[ 685], 95.00th=[ 701], 00:22:19.786 | 99.00th=[ 979], 99.50th=[ 1029], 99.90th=[ 1090], 99.95th=[ 1090], 00:22:19.786 | 99.99th=[ 1123] 00:22:19.786 bw ( KiB/s): min=52632, max=56848, per=99.97%, avg=55185.78, stdev=1268.12, samples=9 00:22:19.786 iops : min= 774, max= 836, avg=811.56, stdev=18.65, samples=9 00:22:19.786 lat (usec) : 500=11.07%, 750=87.31%, 1000=1.18% 00:22:19.786 lat 
(msec) : 2=0.44% 00:22:19.786 cpu : usr=99.10%, sys=0.17%, ctx=9, majf=0, minf=1167 00:22:19.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:19.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.786 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:19.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:19.786 00:22:19.786 Run status group 0 (all jobs): 00:22:19.786 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=255MiB (267MB), run=4755-4755msec 00:22:19.786 WRITE: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=256MiB (269MB), run=4750-4750msec 00:22:20.723 ----------------------------------------------------- 00:22:20.723 Suppressions used: 00:22:20.723 count bytes template 00:22:20.723 1 5 /usr/src/fio/parse.c 00:22:20.723 1 8 libtcmalloc_minimal.so 00:22:20.723 1 904 libcrypto.so 00:22:20.723 ----------------------------------------------------- 00:22:20.723 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:20.723 21:53:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:20.983 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:20.983 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:20.983 fio-3.35 00:22:20.983 Starting 2 threads 00:22:53.094 00:22:53.094 first_half: (groupid=0, jobs=1): err= 0: pid=78595: Tue Dec 10 21:53:57 2024 00:22:53.094 read: IOPS=2424, BW=9699KiB/s (9931kB/s)(255MiB/26935msec) 00:22:53.094 slat (usec): min=3, max=264, avg=10.56, stdev= 4.45 00:22:53.094 clat (usec): min=1151, max=325726, avg=41387.38, stdev=21045.15 00:22:53.094 lat (usec): min=1167, max=325736, avg=41397.94, stdev=21045.81 00:22:53.094 clat percentiles (msec): 00:22:53.094 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 36], 00:22:53.094 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:22:53.094 | 70.00th=[ 38], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 61], 00:22:53.094 | 99.00th=[ 161], 99.50th=[ 194], 99.90th=[ 239], 99.95th=[ 271], 00:22:53.094 | 99.99th=[ 317] 00:22:53.094 write: IOPS=2915, BW=11.4MiB/s (11.9MB/s)(256MiB/22482msec); 0 zone resets 00:22:53.094 slat (usec): min=4, max=1222, avg=10.80, stdev=12.54 00:22:53.094 clat (usec): min=470, max=98138, avg=11325.94, stdev=18284.00 00:22:53.094 lat (usec): min=483, max=98144, avg=11336.74, stdev=18284.19 00:22:53.094 clat percentiles (usec): 00:22:53.094 | 1.00th=[ 1188], 5.00th=[ 1549], 10.00th=[ 1811], 20.00th=[ 2212], 00:22:53.094 | 30.00th=[ 3949], 40.00th=[ 5800], 50.00th=[ 6915], 60.00th=[ 8094], 00:22:53.094 | 70.00th=[ 9241], 80.00th=[11600], 90.00th=[14091], 95.00th=[45876], 00:22:53.094 | 99.00th=[91751], 99.50th=[93848], 99.90th=[95945], 99.95th=[96994], 00:22:53.094 | 99.99th=[96994] 00:22:53.094 bw ( KiB/s): min= 1488, max=40064, per=100.00%, avg=21839.13, stdev=9871.57, samples=24 00:22:53.094 iops : min= 372, max=10016, avg=5459.75, stdev=2467.88, samples=24 00:22:53.095 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.12% 00:22:53.095 lat (msec) : 2=7.59%, 4=7.57%, 10=21.78%, 20=9.79%, 50=46.29% 00:22:53.095 lat (msec) : 100=5.61%, 250=1.18%, 500=0.03% 00:22:53.095 cpu : usr=99.11%, sys=0.23%, ctx=65, majf=0, minf=5595 00:22:53.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:53.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.095 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:53.095 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:53.095 second_half: (groupid=0, jobs=1): err= 0: pid=78596: Tue Dec 10 21:53:57 2024 00:22:53.095 read: IOPS=2410, BW=9642KiB/s (9873kB/s)(255MiB/27097msec) 00:22:53.095 slat (nsec): min=3464, max=51573, avg=8075.02, stdev=3695.02 00:22:53.095 clat (usec): min=1158, max=331029, avg=41107.70, stdev=24922.86 00:22:53.095 lat (usec): min=1165, max=331038, avg=41115.78, stdev=24923.71 00:22:53.095 clat percentiles (msec): 00:22:53.095 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 36], 00:22:53.095 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:22:53.095 | 70.00th=[ 38], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 58], 00:22:53.095 | 
99.00th=[ 182], 99.50th=[ 207], 99.90th=[ 232], 99.95th=[ 251], 00:22:53.095 | 99.99th=[ 321] 00:22:53.095 write: IOPS=2658, BW=10.4MiB/s (10.9MB/s)(256MiB/24648msec); 0 zone resets 00:22:53.095 slat (usec): min=4, max=797, avg= 9.86, stdev= 6.99 00:22:53.095 clat (usec): min=440, max=99428, avg=11922.27, stdev=19734.32 00:22:53.095 lat (usec): min=456, max=99449, avg=11932.13, stdev=19734.83 00:22:53.095 clat percentiles (usec): 00:22:53.095 | 1.00th=[ 1090], 5.00th=[ 1369], 10.00th=[ 1582], 20.00th=[ 1860], 00:22:53.095 | 30.00th=[ 2180], 40.00th=[ 3752], 50.00th=[ 5473], 60.00th=[ 6980], 00:22:53.095 | 70.00th=[ 9241], 80.00th=[12518], 90.00th=[35390], 95.00th=[54264], 00:22:53.095 | 99.00th=[92799], 99.50th=[94897], 99.90th=[96994], 99.95th=[98042], 00:22:53.095 | 99.99th=[98042] 00:22:53.095 bw ( KiB/s): min= 328, max=56352, per=94.79%, avg=20162.77, stdev=16491.04, samples=26 00:22:53.095 iops : min= 82, max=14088, avg=5040.69, stdev=4122.76, samples=26 00:22:53.095 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.24% 00:22:53.095 lat (msec) : 2=12.37%, 4=8.43%, 10=16.72%, 20=8.08%, 50=48.72% 00:22:53.095 lat (msec) : 100=3.70%, 250=1.70%, 500=0.03% 00:22:53.095 cpu : usr=99.29%, sys=0.17%, ctx=55, majf=0, minf=5512 00:22:53.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:53.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.095 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:53.095 issued rwts: total=65315,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:53.095 00:22:53.095 Run status group 0 (all jobs): 00:22:53.095 READ: bw=18.8MiB/s (19.7MB/s), 9642KiB/s-9699KiB/s (9873kB/s-9931kB/s), io=510MiB (535MB), run=26935-27097msec 00:22:53.095 WRITE: bw=20.8MiB/s (21.8MB/s), 10.4MiB/s-11.4MiB/s (10.9MB/s-11.9MB/s), io=512MiB (537MB), run=22482-24648msec 00:22:53.095 ----------------------------------------------------- 00:22:53.095 Suppressions used: 00:22:53.095 count bytes template 00:22:53.095 2 10 /usr/src/fio/parse.c 00:22:53.095 4 384 /usr/src/fio/iolog.c 00:22:53.095 1 8 libtcmalloc_minimal.so 00:22:53.095 1 904 libcrypto.so 00:22:53.095 ----------------------------------------------------- 00:22:53.095 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:53.095 21:53:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:53.095 21:53:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:53.095 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:53.095 fio-3.35 00:22:53.095 Starting 1 thread 00:23:11.181 00:23:11.181 test: (groupid=0, jobs=1): err= 0: pid=78940: Tue Dec 10 21:54:17 2024 00:23:11.181 read: IOPS=6761, BW=26.4MiB/s (27.7MB/s)(255MiB/9643msec) 00:23:11.181 slat (nsec): min=3349, max=48390, avg=9654.40, stdev=4949.96 00:23:11.181 clat (usec): min=820, max=36450, avg=18915.59, stdev=1211.01 00:23:11.181 lat (usec): min=824, max=36465, avg=18925.25, stdev=1210.90 00:23:11.181 clat percentiles (usec): 00:23:11.181 | 1.00th=[17695], 5.00th=[17957], 10.00th=[18220], 20.00th=[18482], 00:23:11.181 | 30.00th=[18482], 40.00th=[18744], 50.00th=[18744], 60.00th=[19006], 00:23:11.181 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19530], 95.00th=[20055], 00:23:11.181 | 99.00th=[24249], 99.50th=[27395], 99.90th=[30802], 99.95th=[32113], 00:23:11.181 | 99.99th=[35914] 00:23:11.181 write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(256MiB/6513msec); 0 zone resets 00:23:11.181 slat (usec): min=4, max=781, avg= 9.79, stdev=10.42 00:23:11.181 clat (usec): min=522, max=74235, avg=12660.65, stdev=16824.24 00:23:11.181 lat (usec): min=529, max=74244, avg=12670.44, stdev=16824.45 00:23:11.181 clat percentiles (usec): 00:23:11.181 | 1.00th=[ 1123], 5.00th=[ 1434], 10.00th=[ 1647], 20.00th=[ 1958], 00:23:11.181 | 30.00th=[ 2311], 40.00th=[ 3195], 50.00th=[ 6915], 60.00th=[ 8979], 00:23:11.181 | 70.00th=[10290], 80.00th=[12518], 90.00th=[46400], 95.00th=[54789], 00:23:11.181 | 99.00th=[62653], 99.50th=[65799], 99.90th=[68682], 99.95th=[70779], 00:23:11.181 | 99.99th=[71828] 00:23:11.181 bw ( KiB/s): min= 1008, max=68688, per=93.03%, avg=37442.43, stdev=15391.95, samples=14 00:23:11.181 iops : min= 252, max=17172, avg=9360.57, stdev=3847.96, samples=14 00:23:11.181 lat (usec) : 750=0.01%, 1000=0.16% 00:23:11.181 lat (msec) : 2=10.59%, 4=10.20%, 10=12.98%, 20=55.70%, 50=6.34% 00:23:11.181 lat (msec) : 100=4.03% 00:23:11.181 cpu : usr=98.77%, sys=0.41%, ctx=40, majf=0, minf=5563 
00:23:11.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:11.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:11.181 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:11.181 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:11.181 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:11.181 00:23:11.181 Run status group 0 (all jobs): 00:23:11.181 READ: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=255MiB (267MB), run=9643-9643msec 00:23:11.181 WRITE: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=256MiB (268MB), run=6513-6513msec 00:23:12.119 ----------------------------------------------------- 00:23:12.119 Suppressions used: 00:23:12.119 count bytes template 00:23:12.119 1 5 /usr/src/fio/parse.c 00:23:12.119 2 192 /usr/src/fio/iolog.c 00:23:12.119 1 8 libtcmalloc_minimal.so 00:23:12.119 1 904 libcrypto.so 00:23:12.119 ----------------------------------------------------- 00:23:12.119 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:23:12.119 Remove shared memory files 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid59089 /dev/shm/spdk_tgt_trace.pid77165 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:23:12.119 ************************************ 00:23:12.119 END TEST ftl_fio_basic 00:23:12.119 ************************************ 00:23:12.119 00:23:12.119 real 1m14.222s 00:23:12.119 user 2m41.372s 00:23:12.119 sys 0m4.209s 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.119 21:54:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:12.119 21:54:19 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:12.119 21:54:19 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:12.119 21:54:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.119 21:54:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:12.119 ************************************ 00:23:12.119 START TEST ftl_bdevperf 00:23:12.119 ************************************ 00:23:12.119 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:23:12.119 * Looking for test storage... 
00:23:12.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.379 --rc genhtml_branch_coverage=1 00:23:12.379 --rc genhtml_function_coverage=1 00:23:12.379 --rc genhtml_legend=1 00:23:12.379 --rc geninfo_all_blocks=1 00:23:12.379 --rc geninfo_unexecuted_blocks=1 00:23:12.379 00:23:12.379 ' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.379 --rc genhtml_branch_coverage=1 00:23:12.379 
--rc genhtml_function_coverage=1 00:23:12.379 --rc genhtml_legend=1 00:23:12.379 --rc geninfo_all_blocks=1 00:23:12.379 --rc geninfo_unexecuted_blocks=1 00:23:12.379 00:23:12.379 ' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.379 --rc genhtml_branch_coverage=1 00:23:12.379 --rc genhtml_function_coverage=1 00:23:12.379 --rc genhtml_legend=1 00:23:12.379 --rc geninfo_all_blocks=1 00:23:12.379 --rc geninfo_unexecuted_blocks=1 00:23:12.379 00:23:12.379 ' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.379 --rc genhtml_branch_coverage=1 00:23:12.379 --rc genhtml_function_coverage=1 00:23:12.379 --rc genhtml_legend=1 00:23:12.379 --rc geninfo_all_blocks=1 00:23:12.379 --rc geninfo_unexecuted_blocks=1 00:23:12.379 00:23:12.379 ' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:12.379 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:12.380 21:54:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=79213 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 79213 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 79213 ']' 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.380 21:54:20 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.380 [2024-12-10 21:54:20.100793] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:23:12.380 [2024-12-10 21:54:20.100941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79213 ] 00:23:12.639 [2024-12-10 21:54:20.283768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.898 [2024-12-10 21:54:20.405440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.465 21:54:20 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.465 21:54:20 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:13.465 21:54:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:13.465 21:54:20 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:23:13.465 21:54:20 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:13.465 21:54:20 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:23:13.465 21:54:20 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:23:13.465 21:54:20 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:13.724 { 00:23:13.724 "name": "nvme0n1", 00:23:13.724 "aliases": [ 00:23:13.724 "b97b1c24-8555-4c94-8907-8186d671cafd" 00:23:13.724 ], 00:23:13.724 "product_name": "NVMe disk", 00:23:13.724 "block_size": 4096, 00:23:13.724 "num_blocks": 1310720, 00:23:13.724 "uuid": "b97b1c24-8555-4c94-8907-8186d671cafd", 00:23:13.724 "numa_id": -1, 00:23:13.724 "assigned_rate_limits": { 00:23:13.724 "rw_ios_per_sec": 0, 00:23:13.724 "rw_mbytes_per_sec": 0, 00:23:13.724 "r_mbytes_per_sec": 0, 00:23:13.724 "w_mbytes_per_sec": 0 00:23:13.724 }, 00:23:13.724 "claimed": true, 00:23:13.724 "claim_type": "read_many_write_one", 00:23:13.724 "zoned": false, 00:23:13.724 "supported_io_types": { 00:23:13.724 "read": true, 00:23:13.724 "write": true, 00:23:13.724 "unmap": true, 00:23:13.724 "flush": true, 00:23:13.724 "reset": true, 00:23:13.724 "nvme_admin": true, 00:23:13.724 "nvme_io": true, 00:23:13.724 "nvme_io_md": false, 00:23:13.724 "write_zeroes": true, 00:23:13.724 "zcopy": false, 00:23:13.724 "get_zone_info": false, 00:23:13.724 "zone_management": false, 00:23:13.724 "zone_append": false, 00:23:13.724 "compare": true, 00:23:13.724 "compare_and_write": false, 00:23:13.724 "abort": true, 00:23:13.724 "seek_hole": false, 00:23:13.724 "seek_data": false, 00:23:13.724 "copy": true, 00:23:13.724 "nvme_iov_md": false 00:23:13.724 }, 00:23:13.724 "driver_specific": { 00:23:13.724 
"nvme": [ 00:23:13.724 { 00:23:13.724 "pci_address": "0000:00:11.0", 00:23:13.724 "trid": { 00:23:13.724 "trtype": "PCIe", 00:23:13.724 "traddr": "0000:00:11.0" 00:23:13.724 }, 00:23:13.724 "ctrlr_data": { 00:23:13.724 "cntlid": 0, 00:23:13.724 "vendor_id": "0x1b36", 00:23:13.724 "model_number": "QEMU NVMe Ctrl", 00:23:13.724 "serial_number": "12341", 00:23:13.724 "firmware_revision": "8.0.0", 00:23:13.724 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:13.724 "oacs": { 00:23:13.724 "security": 0, 00:23:13.724 "format": 1, 00:23:13.724 "firmware": 0, 00:23:13.724 "ns_manage": 1 00:23:13.724 }, 00:23:13.724 "multi_ctrlr": false, 00:23:13.724 "ana_reporting": false 00:23:13.724 }, 00:23:13.724 "vs": { 00:23:13.724 "nvme_version": "1.4" 00:23:13.724 }, 00:23:13.724 "ns_data": { 00:23:13.724 "id": 1, 00:23:13.724 "can_share": false 00:23:13.724 } 00:23:13.724 } 00:23:13.724 ], 00:23:13.724 "mp_policy": "active_passive" 00:23:13.724 } 00:23:13.724 } 00:23:13.724 ]' 00:23:13.724 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e0f8a8a3-bfca-48e9-9582-426980e6fc31 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:23:13.983 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0f8a8a3-bfca-48e9-9582-426980e6fc31 00:23:14.242 21:54:21 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:14.500 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=fa03631b-bbca-4314-8dae-4fbd91888ce5 00:23:14.500 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fa03631b-bbca-4314-8dae-4fbd91888ce5 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:14.759 21:54:22 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:14.759 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:15.018 { 00:23:15.018 "name": "dc6d3319-4217-4811-9129-c5c41a9b8702", 00:23:15.018 "aliases": [ 00:23:15.018 "lvs/nvme0n1p0" 00:23:15.018 ], 00:23:15.018 "product_name": "Logical Volume", 00:23:15.018 "block_size": 4096, 00:23:15.018 "num_blocks": 26476544, 00:23:15.018 "uuid": "dc6d3319-4217-4811-9129-c5c41a9b8702", 00:23:15.018 "assigned_rate_limits": { 00:23:15.018 "rw_ios_per_sec": 0, 00:23:15.018 "rw_mbytes_per_sec": 0, 00:23:15.018 "r_mbytes_per_sec": 0, 00:23:15.018 "w_mbytes_per_sec": 0 00:23:15.018 }, 00:23:15.018 "claimed": false, 00:23:15.018 "zoned": false, 00:23:15.018 "supported_io_types": { 00:23:15.018 "read": true, 00:23:15.018 "write": true, 00:23:15.018 "unmap": true, 00:23:15.018 "flush": false, 00:23:15.018 "reset": true, 00:23:15.018 "nvme_admin": false, 00:23:15.018 "nvme_io": false, 00:23:15.018 "nvme_io_md": false, 00:23:15.018 "write_zeroes": true, 00:23:15.018 "zcopy": false, 00:23:15.018 "get_zone_info": false, 00:23:15.018 "zone_management": false, 00:23:15.018 "zone_append": false, 00:23:15.018 "compare": false, 00:23:15.018 "compare_and_write": false, 00:23:15.018 "abort": false, 00:23:15.018 "seek_hole": true, 00:23:15.018 "seek_data": true, 00:23:15.018 "copy": false, 00:23:15.018 "nvme_iov_md": false 00:23:15.018 }, 00:23:15.018 "driver_specific": { 00:23:15.018 "lvol": { 00:23:15.018 "lvol_store_uuid": "fa03631b-bbca-4314-8dae-4fbd91888ce5", 00:23:15.018 "base_bdev": "nvme0n1", 00:23:15.018 "thin_provision": true, 00:23:15.018 "num_allocated_clusters": 0, 00:23:15.018 "snapshot": false, 00:23:15.018 "clone": false, 00:23:15.018 "esnap_clone": false 00:23:15.018 } 00:23:15.018 } 00:23:15.018 } 00:23:15.018 ]' 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:23:15.018 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:15.277 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:15.277 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:15.277 21:54:22 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:15.277 21:54:22 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:15.277 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:15.277 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:15.277 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:15.277 21:54:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:15.535 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:15.535 { 00:23:15.535 "name": "dc6d3319-4217-4811-9129-c5c41a9b8702", 00:23:15.535 "aliases": [ 00:23:15.535 "lvs/nvme0n1p0" 00:23:15.535 ], 00:23:15.535 "product_name": "Logical Volume", 00:23:15.535 "block_size": 4096, 00:23:15.535 "num_blocks": 26476544, 00:23:15.535 "uuid": "dc6d3319-4217-4811-9129-c5c41a9b8702", 00:23:15.535 "assigned_rate_limits": { 00:23:15.535 "rw_ios_per_sec": 0, 00:23:15.535 "rw_mbytes_per_sec": 0, 00:23:15.535 "r_mbytes_per_sec": 0, 00:23:15.535 "w_mbytes_per_sec": 0 00:23:15.535 }, 00:23:15.535 "claimed": false, 00:23:15.535 "zoned": false, 00:23:15.535 "supported_io_types": { 00:23:15.535 "read": true, 00:23:15.535 "write": true, 00:23:15.535 "unmap": true, 00:23:15.535 "flush": false, 00:23:15.535 "reset": true, 00:23:15.535 "nvme_admin": false, 00:23:15.535 "nvme_io": false, 00:23:15.535 "nvme_io_md": false, 00:23:15.535 "write_zeroes": true, 00:23:15.535 "zcopy": false, 00:23:15.535 "get_zone_info": false, 00:23:15.535 "zone_management": false, 00:23:15.535 "zone_append": false, 00:23:15.535 "compare": false, 00:23:15.535 "compare_and_write": false, 00:23:15.535 "abort": false, 00:23:15.535 "seek_hole": true, 00:23:15.535 "seek_data": true, 00:23:15.535 "copy": false, 00:23:15.535 "nvme_iov_md": false 00:23:15.535 }, 00:23:15.535 "driver_specific": { 00:23:15.535 "lvol": { 00:23:15.535 "lvol_store_uuid": "fa03631b-bbca-4314-8dae-4fbd91888ce5", 00:23:15.535 "base_bdev": "nvme0n1", 00:23:15.535 "thin_provision": true, 00:23:15.535 "num_allocated_clusters": 0, 00:23:15.535 "snapshot": false, 00:23:15.535 "clone": false, 00:23:15.535 "esnap_clone": false 00:23:15.535 } 00:23:15.535 } 00:23:15.535 } 00:23:15.535 ]' 00:23:15.535 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:15.536 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:15.536 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:15.536 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:15.536 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:15.536 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:15.536 21:54:23 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:23:15.536 21:54:23 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:15.794 21:54:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:23:15.794 21:54:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:15.794 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:15.794 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:15.794 21:54:23 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:23:15.794 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:15.794 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc6d3319-4217-4811-9129-c5c41a9b8702 00:23:16.052 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:16.052 { 00:23:16.052 "name": "dc6d3319-4217-4811-9129-c5c41a9b8702", 00:23:16.052 "aliases": [ 00:23:16.052 "lvs/nvme0n1p0" 00:23:16.052 ], 00:23:16.052 "product_name": "Logical Volume", 00:23:16.052 "block_size": 4096, 00:23:16.052 "num_blocks": 26476544, 00:23:16.052 "uuid": "dc6d3319-4217-4811-9129-c5c41a9b8702", 00:23:16.052 "assigned_rate_limits": { 00:23:16.052 "rw_ios_per_sec": 0, 00:23:16.052 "rw_mbytes_per_sec": 0, 00:23:16.052 "r_mbytes_per_sec": 0, 00:23:16.052 "w_mbytes_per_sec": 0 00:23:16.053 }, 00:23:16.053 "claimed": false, 00:23:16.053 "zoned": false, 00:23:16.053 "supported_io_types": { 00:23:16.053 "read": true, 00:23:16.053 "write": true, 00:23:16.053 "unmap": true, 00:23:16.053 "flush": false, 00:23:16.053 "reset": true, 00:23:16.053 "nvme_admin": false, 00:23:16.053 "nvme_io": false, 00:23:16.053 "nvme_io_md": false, 00:23:16.053 "write_zeroes": true, 00:23:16.053 "zcopy": false, 00:23:16.053 "get_zone_info": false, 00:23:16.053 "zone_management": false, 00:23:16.053 "zone_append": false, 00:23:16.053 "compare": false, 00:23:16.053 "compare_and_write": false, 00:23:16.053 "abort": false, 00:23:16.053 "seek_hole": true, 00:23:16.053 "seek_data": true, 00:23:16.053 "copy": false, 00:23:16.053 "nvme_iov_md": false 00:23:16.053 }, 00:23:16.053 "driver_specific": { 00:23:16.053 "lvol": { 00:23:16.053 "lvol_store_uuid": "fa03631b-bbca-4314-8dae-4fbd91888ce5", 00:23:16.053 "base_bdev": "nvme0n1", 00:23:16.053 "thin_provision": true, 00:23:16.053 "num_allocated_clusters": 0, 00:23:16.053 "snapshot": false, 00:23:16.053 "clone": false, 00:23:16.053 "esnap_clone": false 00:23:16.053 } 00:23:16.053 } 00:23:16.053 } 00:23:16.053 ]' 00:23:16.053 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:16.053 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:16.053 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:16.053 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:16.053 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:16.053 21:54:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:16.053 21:54:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:23:16.053 21:54:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dc6d3319-4217-4811-9129-c5c41a9b8702 -c nvc0n1p0 --l2p_dram_limit 20 00:23:16.312 [2024-12-10 21:54:23.810621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.312 [2024-12-10 21:54:23.810680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:16.312 [2024-12-10 21:54:23.810697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:16.312 [2024-12-10 21:54:23.810710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.312 [2024-12-10 21:54:23.810756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.312 [2024-12-10 21:54:23.810770] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.312 [2024-12-10 21:54:23.810781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:16.312 [2024-12-10 21:54:23.810794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.312 [2024-12-10 21:54:23.810812] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:16.312 [2024-12-10 21:54:23.811654] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:16.313 [2024-12-10 21:54:23.811683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.811697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.313 [2024-12-10 21:54:23.811709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:23:16.313 [2024-12-10 21:54:23.811722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.811784] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 073cceef-01c1-443d-ac56-f342e97b76a4 00:23:16.313 [2024-12-10 21:54:23.813494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.813529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:16.313 [2024-12-10 21:54:23.813547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:16.313 [2024-12-10 21:54:23.813557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.826557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.826584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.313 [2024-12-10 21:54:23.826601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.982 ms 00:23:16.313 [2024-12-10 21:54:23.826615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.826715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.826731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.313 [2024-12-10 21:54:23.826749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:16.313 [2024-12-10 21:54:23.826759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.826816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.826828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:16.313 [2024-12-10 21:54:23.826841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:16.313 [2024-12-10 21:54:23.826851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.826878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:16.313 [2024-12-10 21:54:23.832408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.832443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.313 [2024-12-10 21:54:23.832455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.551 ms 00:23:16.313 [2024-12-10 21:54:23.832471] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.832504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.832519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:16.313 [2024-12-10 21:54:23.832530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:16.313 [2024-12-10 21:54:23.832543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.832572] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:16.313 [2024-12-10 21:54:23.832711] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:16.313 [2024-12-10 21:54:23.832726] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:16.313 [2024-12-10 21:54:23.832743] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:16.313 [2024-12-10 21:54:23.832755] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:16.313 [2024-12-10 21:54:23.832772] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:16.313 [2024-12-10 21:54:23.832783] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:16.313 [2024-12-10 21:54:23.832798] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:16.313 [2024-12-10 21:54:23.832808] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:16.313 [2024-12-10 21:54:23.832820] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:16.313 [2024-12-10 21:54:23.832833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.832848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:16.313 [2024-12-10 21:54:23.832858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:23:16.313 [2024-12-10 21:54:23.832871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.832941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.313 [2024-12-10 21:54:23.832956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:16.313 [2024-12-10 21:54:23.832967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:16.313 [2024-12-10 21:54:23.832982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.313 [2024-12-10 21:54:23.833067] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:16.313 [2024-12-10 21:54:23.833089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:16.313 [2024-12-10 21:54:23.833100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:16.313 [2024-12-10 21:54:23.833113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:16.313 [2024-12-10 21:54:23.833135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:16.313 
[2024-12-10 21:54:23.833157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:16.313 [2024-12-10 21:54:23.833167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:16.313 [2024-12-10 21:54:23.833191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:16.313 [2024-12-10 21:54:23.833214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:16.313 [2024-12-10 21:54:23.833226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:16.313 [2024-12-10 21:54:23.833239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:16.313 [2024-12-10 21:54:23.833249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:16.313 [2024-12-10 21:54:23.833266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:16.313 [2024-12-10 21:54:23.833286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:16.313 [2024-12-10 21:54:23.833295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:16.313 [2024-12-10 21:54:23.833316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.313 [2024-12-10 21:54:23.833336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:16.313 [2024-12-10 21:54:23.833347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.313 [2024-12-10 21:54:23.833369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:16.313 [2024-12-10 21:54:23.833377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.313 [2024-12-10 21:54:23.833399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:16.313 [2024-12-10 21:54:23.833411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:16.313 [2024-12-10 21:54:23.833435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:16.313 [2024-12-10 21:54:23.833444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:16.313 [2024-12-10 21:54:23.833468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:16.313 [2024-12-10 21:54:23.833479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:16.313 [2024-12-10 21:54:23.833487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:16.313 [2024-12-10 21:54:23.833499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:16.313 [2024-12-10 21:54:23.833508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:23:16.313 [2024-12-10 21:54:23.833519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:16.313 [2024-12-10 21:54:23.833541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:16.313 [2024-12-10 21:54:23.833550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833561] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:16.313 [2024-12-10 21:54:23.833571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:16.313 [2024-12-10 21:54:23.833583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:16.313 [2024-12-10 21:54:23.833592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:16.313 [2024-12-10 21:54:23.833608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:16.314 [2024-12-10 21:54:23.833617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:16.314 [2024-12-10 21:54:23.833628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:16.314 [2024-12-10 21:54:23.833637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:16.314 [2024-12-10 21:54:23.833648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:16.314 [2024-12-10 21:54:23.833657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:16.314 [2024-12-10 21:54:23.833670] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:16.314 [2024-12-10 21:54:23.833681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.314 [2024-12-10 21:54:23.833696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:16.314 [2024-12-10 21:54:23.833706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:16.314 [2024-12-10 21:54:23.833719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:16.314 [2024-12-10 21:54:23.833728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:16.314 [2024-12-10 21:54:23.833741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:16.314 [2024-12-10 21:54:23.833751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:16.314 [2024-12-10 21:54:23.833765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:16.314 [2024-12-10 21:54:23.833775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:16.314 [2024-12-10 21:54:23.833790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:16.314 [2024-12-10 21:54:23.833799] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:16.314 [2024-12-10 21:54:23.833811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:16.314 [2024-12-10 21:54:23.833820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:16.314 [2024-12-10 21:54:23.833833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:16.314 [2024-12-10 21:54:23.833844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:16.314 [2024-12-10 21:54:23.833856] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:16.314 [2024-12-10 21:54:23.833867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.314 [2024-12-10 21:54:23.833883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:16.314 [2024-12-10 21:54:23.833893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:16.314 [2024-12-10 21:54:23.833906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:16.314 [2024-12-10 21:54:23.833915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:16.314 [2024-12-10 21:54:23.833930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.314 [2024-12-10 21:54:23.833941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:16.314 [2024-12-10 21:54:23.833954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:23:16.314 [2024-12-10 21:54:23.833964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.314 [2024-12-10 21:54:23.834003] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
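Note: the two layout dumps above describe the same regions in different units. dump_region prints offsets and sizes in MiB, while the superblock dump prints blk_offs/blk_sz as counts of FTL blocks; with a 4 KiB block size (SPDK's FTL_BLOCK_SIZE, assumed here) the figures line up, e.g. the region at blk_offs:0x5020 matches the band_md region at 80.12 MiB. A minimal conversion sketch:

    # Sketch: convert the superblock dump's blk_offs/blk_sz (FTL blocks)
    # into the MiB figures printed by dump_region. 4 KiB block size assumed.
    FTL_BLOCK_SIZE = 4096  # bytes

    def blocks_to_mib(blocks: int) -> float:
        return blocks * FTL_BLOCK_SIZE / (1 << 20)

    # Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 from the dump above:
    print(f"{blocks_to_mib(0x5020):.2f} MiB")  # 80.12 -> band_md offset
    print(f"{blocks_to_mib(0x80):.2f} MiB")    # 0.50  -> band_md size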
00:23:16.314 [2024-12-10 21:54:23.834015] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:20.503 [2024-12-10 21:54:27.788094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.503 [2024-12-10 21:54:27.788175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:20.503 [2024-12-10 21:54:27.788195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3960.510 ms 00:23:20.503 [2024-12-10 21:54:27.788207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.503 [2024-12-10 21:54:27.835586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.503 [2024-12-10 21:54:27.835641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:20.503 [2024-12-10 21:54:27.835670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.972 ms 00:23:20.503 [2024-12-10 21:54:27.835682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.503 [2024-12-10 21:54:27.835816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.503 [2024-12-10 21:54:27.835831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:20.503 [2024-12-10 21:54:27.835849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:20.503 [2024-12-10 21:54:27.835858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.503 [2024-12-10 21:54:27.891441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.503 [2024-12-10 21:54:27.891491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:20.503 [2024-12-10 21:54:27.891509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.630 ms 00:23:20.503 [2024-12-10 21:54:27.891520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.503 [2024-12-10 21:54:27.891566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.503 [2024-12-10 21:54:27.891577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:20.503 [2024-12-10 21:54:27.891591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:20.503 [2024-12-10 21:54:27.891605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.503 [2024-12-10 21:54:27.892408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.503 [2024-12-10 21:54:27.892429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:20.503 [2024-12-10 21:54:27.892444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:23:20.503 [2024-12-10 21:54:27.892454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.503 [2024-12-10 21:54:27.892569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.503 [2024-12-10 21:54:27.892583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:20.504 [2024-12-10 21:54:27.892600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:20.504 [2024-12-10 21:54:27.892610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.504 [2024-12-10 21:54:27.913307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.504 [2024-12-10 21:54:27.913343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:20.504 [2024-12-10 
21:54:27.913360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.703 ms 00:23:20.504 [2024-12-10 21:54:27.913383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.504 [2024-12-10 21:54:27.926880] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:23:20.504 [2024-12-10 21:54:27.935795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.504 [2024-12-10 21:54:27.935832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:20.504 [2024-12-10 21:54:27.935845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.370 ms 00:23:20.504 [2024-12-10 21:54:27.935859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.504 [2024-12-10 21:54:28.038030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.504 [2024-12-10 21:54:28.038083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:20.504 [2024-12-10 21:54:28.038097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.308 ms 00:23:20.504 [2024-12-10 21:54:28.038111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.504 [2024-12-10 21:54:28.038290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.504 [2024-12-10 21:54:28.038312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:20.504 [2024-12-10 21:54:28.038324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:23:20.504 [2024-12-10 21:54:28.038342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.504 [2024-12-10 21:54:28.072627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.504 [2024-12-10 21:54:28.072671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:20.504 [2024-12-10 21:54:28.072686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.285 ms 00:23:20.504 [2024-12-10 21:54:28.072699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.504 [2024-12-10 21:54:28.105836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.504 [2024-12-10 21:54:28.105878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:20.504 [2024-12-10 21:54:28.105892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.151 ms 00:23:20.504 [2024-12-10 21:54:28.105905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.504 [2024-12-10 21:54:28.106654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.504 [2024-12-10 21:54:28.106687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:20.504 [2024-12-10 21:54:28.106699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:23:20.504 [2024-12-10 21:54:28.106712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.504 [2024-12-10 21:54:28.208313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.504 [2024-12-10 21:54:28.208360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:20.504 [2024-12-10 21:54:28.208374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.715 ms 00:23:20.504 [2024-12-10 21:54:28.208387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.763 [2024-12-10 
21:54:28.245217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.763 [2024-12-10 21:54:28.245260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:20.763 [2024-12-10 21:54:28.245278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.815 ms 00:23:20.763 [2024-12-10 21:54:28.245292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.763 [2024-12-10 21:54:28.279474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.763 [2024-12-10 21:54:28.279516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:20.763 [2024-12-10 21:54:28.279530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.198 ms 00:23:20.763 [2024-12-10 21:54:28.279543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.763 [2024-12-10 21:54:28.313037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.763 [2024-12-10 21:54:28.313089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:20.763 [2024-12-10 21:54:28.313102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.512 ms 00:23:20.763 [2024-12-10 21:54:28.313116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.763 [2024-12-10 21:54:28.313157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.763 [2024-12-10 21:54:28.313175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:20.763 [2024-12-10 21:54:28.313187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:20.763 [2024-12-10 21:54:28.313200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.763 [2024-12-10 21:54:28.313298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.763 [2024-12-10 21:54:28.313314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:20.763 [2024-12-10 21:54:28.313326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:20.763 [2024-12-10 21:54:28.313339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.763 [2024-12-10 21:54:28.314592] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4510.781 ms, result 0 00:23:20.763 { 00:23:20.763 "name": "ftl0", 00:23:20.763 "uuid": "073cceef-01c1-443d-ac56-f342e97b76a4" 00:23:20.763 } 00:23:20.763 21:54:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:23:20.763 21:54:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:23:20.763 21:54:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:23:21.021 21:54:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:23:21.021 [2024-12-10 21:54:28.618325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:21.021 I/O size of 69632 is greater than zero copy threshold (65536). 00:23:21.021 Zero copy mechanism will not be used. 00:23:21.021 Running I/O for 4 seconds... 
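Note: the zero-copy notice just above is a straight size comparison: the -o 69632 argument (68 KiB) exceeds the 65536-byte (64 KiB) threshold quoted in the message, so this run falls back to buffered I/O. A sketch of that check, with the threshold taken from the log message rather than from bdevperf's source:

    # Sketch: the zero-copy decision as the log reports it.
    io_size = 69632          # bytes, from perform_tests ... -o 69632 (68 KiB)
    zcopy_threshold = 65536  # bytes, quoted in the log message (64 KiB)
    if io_size > zcopy_threshold:
        print("Zero copy mechanism will not be used.")  # matches the notice above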
00:23:22.893 1374.00 IOPS, 91.24 MiB/s [2024-12-10T21:54:32.004Z] 1385.50 IOPS, 92.01 MiB/s [2024-12-10T21:54:32.939Z] 1408.00 IOPS, 93.50 MiB/s [2024-12-10T21:54:32.939Z] 1430.00 IOPS, 94.96 MiB/s 00:23:25.208 Latency(us) 00:23:25.208 [2024-12-10T21:54:32.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.208 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:23:25.208 ftl0 : 4.00 1429.63 94.94 0.00 0.00 733.25 259.91 2158.21 00:23:25.208 [2024-12-10T21:54:32.939Z] =================================================================================================================== 00:23:25.208 [2024-12-10T21:54:32.939Z] Total : 1429.63 94.94 0.00 0.00 733.25 259.91 2158.21 00:23:25.208 [2024-12-10 21:54:32.622070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:25.208 { 00:23:25.208 "results": [ 00:23:25.208 { 00:23:25.209 "job": "ftl0", 00:23:25.209 "core_mask": "0x1", 00:23:25.209 "workload": "randwrite", 00:23:25.209 "status": "finished", 00:23:25.209 "queue_depth": 1, 00:23:25.209 "io_size": 69632, 00:23:25.209 "runtime": 4.001742, 00:23:25.209 "iops": 1429.6273972684896, 00:23:25.209 "mibps": 94.93619434986064, 00:23:25.209 "io_failed": 0, 00:23:25.209 "io_timeout": 0, 00:23:25.209 "avg_latency_us": 733.248577178843, 00:23:25.209 "min_latency_us": 259.906827309237, 00:23:25.209 "max_latency_us": 2158.213654618474 00:23:25.209 } 00:23:25.209 ], 00:23:25.209 "core_count": 1 00:23:25.209 } 00:23:25.209 21:54:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:23:25.209 [2024-12-10 21:54:32.735436] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:25.209 Running I/O for 4 seconds... 
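Note: the results JSON above is internally consistent and shows how the summary line is derived: IOPS is completed I/Os over runtime, and MiB/s is IOPS times the 69632-byte I/O size. A quick re-derivation with the values copied from that JSON:

    # Sketch: re-derive bdevperf's summary numbers from the JSON above.
    iops = 1429.6273972684896   # "iops" from the JSON
    runtime = 4.001742          # "runtime", seconds
    io_size = 69632             # bytes (-o 69632)

    ios = iops * runtime                 # ~5721 I/Os completed in ~4 s
    mibps = iops * io_size / (1 << 20)   # throughput in MiB/s
    print(round(ios), round(mibps, 2))   # 5721 94.94 -> matches "mibps" above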
00:23:27.080 9766.00 IOPS, 38.15 MiB/s [2024-12-10T21:54:35.745Z] 9372.50 IOPS, 36.61 MiB/s [2024-12-10T21:54:37.123Z] 9621.33 IOPS, 37.58 MiB/s [2024-12-10T21:54:37.123Z] 9254.00 IOPS, 36.15 MiB/s 00:23:29.392 Latency(us) 00:23:29.392 [2024-12-10T21:54:37.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.392 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:23:29.392 ftl0 : 4.04 9204.21 35.95 0.00 0.00 13846.61 222.07 40427.03 00:23:29.392 [2024-12-10T21:54:37.123Z] =================================================================================================================== 00:23:29.392 [2024-12-10T21:54:37.123Z] Total : 9204.21 35.95 0.00 0.00 13846.61 0.00 40427.03 00:23:29.392 [2024-12-10 21:54:36.777448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:29.393 { 00:23:29.393 "results": [ 00:23:29.393 { 00:23:29.393 "job": "ftl0", 00:23:29.393 "core_mask": "0x1", 00:23:29.393 "workload": "randwrite", 00:23:29.393 "status": "finished", 00:23:29.393 "queue_depth": 128, 00:23:29.393 "io_size": 4096, 00:23:29.393 "runtime": 4.035112, 00:23:29.393 "iops": 9204.205484259173, 00:23:29.393 "mibps": 35.953927672887396, 00:23:29.393 "io_failed": 0, 00:23:29.393 "io_timeout": 0, 00:23:29.393 "avg_latency_us": 13846.609571338666, 00:23:29.393 "min_latency_us": 222.0722891566265, 00:23:29.393 "max_latency_us": 40427.0265060241 00:23:29.393 } 00:23:29.393 ], 00:23:29.393 "core_count": 1 00:23:29.393 } 00:23:29.393 21:54:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:23:29.393 [2024-12-10 21:54:36.906168] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:29.393 Running I/O for 4 seconds... 
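Note: the two randwrite runs above also satisfy Little's law, a useful sanity check on bdevperf output: effective queue depth is approximately IOPS times average latency. A quick check with the figures copied from the two summary rows:

    # Sketch: Little's law check (QD ~= IOPS * avg latency in seconds)
    # on the two randwrite runs above; values copied from the log.
    runs = [
        (1,   1429.63, 733.25),    # -q 1,   avg latency in us
        (128, 9204.21, 13846.61),  # -q 128
    ]
    for qd, iops, avg_us in runs:
        print(qd, round(iops * avg_us / 1e6, 2))  # ~1.05 and ~127.45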
00:23:31.339 7138.00 IOPS, 27.88 MiB/s [2024-12-10T21:54:40.008Z] 8017.50 IOPS, 31.32 MiB/s [2024-12-10T21:54:40.945Z] 8031.33 IOPS, 31.37 MiB/s [2024-12-10T21:54:40.945Z] 8051.50 IOPS, 31.45 MiB/s 00:23:33.214 Latency(us) 00:23:33.214 [2024-12-10T21:54:40.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.214 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:33.214 Verification LBA range: start 0x0 length 0x1400000 00:23:33.214 ftl0 : 4.01 8062.10 31.49 0.00 0.00 15828.24 235.23 33899.75 00:23:33.214 [2024-12-10T21:54:40.945Z] =================================================================================================================== 00:23:33.214 [2024-12-10T21:54:40.945Z] Total : 8062.10 31.49 0.00 0.00 15828.24 0.00 33899.75 00:23:33.214 [2024-12-10 21:54:40.930210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:23:33.214 { 00:23:33.214 "results": [ 00:23:33.214 { 00:23:33.214 "job": "ftl0", 00:23:33.214 "core_mask": "0x1", 00:23:33.214 "workload": "verify", 00:23:33.214 "status": "finished", 00:23:33.214 "verify_range": { 00:23:33.214 "start": 0, 00:23:33.214 "length": 20971520 00:23:33.214 }, 00:23:33.214 "queue_depth": 128, 00:23:33.214 "io_size": 4096, 00:23:33.214 "runtime": 4.010494, 00:23:33.214 "iops": 8062.099083055604, 00:23:33.214 "mibps": 31.492574543185953, 00:23:33.214 "io_failed": 0, 00:23:33.214 "io_timeout": 0, 00:23:33.214 "avg_latency_us": 15828.236973651572, 00:23:33.214 "min_latency_us": 235.23212851405623, 00:23:33.214 "max_latency_us": 33899.74618473896 00:23:33.214 } 00:23:33.214 ], 00:23:33.214 "core_count": 1 00:23:33.214 } 00:23:33.473 21:54:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:23:33.473 [2024-12-10 21:54:41.137539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.473 [2024-12-10 21:54:41.137762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:33.473 [2024-12-10 21:54:41.137886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:33.473 [2024-12-10 21:54:41.137931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.473 [2024-12-10 21:54:41.137994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:33.473 [2024-12-10 21:54:41.142569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.473 [2024-12-10 21:54:41.142719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:33.473 [2024-12-10 21:54:41.142918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.272 ms 00:23:33.473 [2024-12-10 21:54:41.142937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.473 [2024-12-10 21:54:41.144654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.473 [2024-12-10 21:54:41.144691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:33.473 [2024-12-10 21:54:41.144708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:23:33.473 [2024-12-10 21:54:41.144722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.733 [2024-12-10 21:54:41.349008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.733 [2024-12-10 21:54:41.349204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:23:33.733 [2024-12-10 21:54:41.349239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 204.591 ms 00:23:33.733 [2024-12-10 21:54:41.349251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.733 [2024-12-10 21:54:41.354078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.733 [2024-12-10 21:54:41.354112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:33.733 [2024-12-10 21:54:41.354127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:23:33.733 [2024-12-10 21:54:41.354157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.733 [2024-12-10 21:54:41.389210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.733 [2024-12-10 21:54:41.389248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:33.733 [2024-12-10 21:54:41.389265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.038 ms 00:23:33.733 [2024-12-10 21:54:41.389274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.733 [2024-12-10 21:54:41.410616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.733 [2024-12-10 21:54:41.410655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:33.733 [2024-12-10 21:54:41.410671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.333 ms 00:23:33.733 [2024-12-10 21:54:41.410698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.733 [2024-12-10 21:54:41.410851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.733 [2024-12-10 21:54:41.410865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:33.733 [2024-12-10 21:54:41.410882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:23:33.733 [2024-12-10 21:54:41.410892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.733 [2024-12-10 21:54:41.445469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.733 [2024-12-10 21:54:41.445599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:33.733 [2024-12-10 21:54:41.445640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.612 ms 00:23:33.733 [2024-12-10 21:54:41.445650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.993 [2024-12-10 21:54:41.480029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.993 [2024-12-10 21:54:41.480193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:33.993 [2024-12-10 21:54:41.480237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.393 ms 00:23:33.993 [2024-12-10 21:54:41.480248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.993 [2024-12-10 21:54:41.514055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.993 [2024-12-10 21:54:41.514091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:33.993 [2024-12-10 21:54:41.514106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.761 ms 00:23:33.993 [2024-12-10 21:54:41.514115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.993 [2024-12-10 21:54:41.547568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.993 [2024-12-10 
21:54:41.547604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:33.993 [2024-12-10 21:54:41.547623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.416 ms 00:23:33.993 [2024-12-10 21:54:41.547648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.993 [2024-12-10 21:54:41.547689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.993 [2024-12-10 21:54:41.547707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:33.993 [2024-12-10 21:54:41.547998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548671] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548982] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.548995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.549006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.549019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.994 [2024-12-10 21:54:41.549037] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.994 [2024-12-10 21:54:41.549051] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 073cceef-01c1-443d-ac56-f342e97b76a4 00:23:33.994 [2024-12-10 21:54:41.549064] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.994 [2024-12-10 21:54:41.549077] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:33.994 [2024-12-10 21:54:41.549087] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.994 [2024-12-10 21:54:41.549109] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.994 [2024-12-10 21:54:41.549119] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.994 [2024-12-10 21:54:41.549132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:33.994 [2024-12-10 21:54:41.549141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.994 [2024-12-10 21:54:41.549156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.994 [2024-12-10 21:54:41.549164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.995 [2024-12-10 21:54:41.549176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.995 [2024-12-10 21:54:41.549187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:33.995 [2024-12-10 21:54:41.549201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:23:33.995 [2024-12-10 21:54:41.549212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.995 [2024-12-10 21:54:41.568574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.995 [2024-12-10 21:54:41.568609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.995 [2024-12-10 21:54:41.568624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.339 ms 00:23:33.995 [2024-12-10 21:54:41.568634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.995 [2024-12-10 21:54:41.569305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.995 [2024-12-10 21:54:41.569323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.995 [2024-12-10 21:54:41.569338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:23:33.995 [2024-12-10 21:54:41.569348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.995 [2024-12-10 21:54:41.622854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.995 [2024-12-10 21:54:41.623028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.995 [2024-12-10 21:54:41.623071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.995 [2024-12-10 21:54:41.623084] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:33.995 [2024-12-10 21:54:41.623145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.995 [2024-12-10 21:54:41.623156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.995 [2024-12-10 21:54:41.623169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.995 [2024-12-10 21:54:41.623180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.995 [2024-12-10 21:54:41.623287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.995 [2024-12-10 21:54:41.623302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.995 [2024-12-10 21:54:41.623316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.995 [2024-12-10 21:54:41.623325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.995 [2024-12-10 21:54:41.623347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.995 [2024-12-10 21:54:41.623358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.995 [2024-12-10 21:54:41.623371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.995 [2024-12-10 21:54:41.623381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.254 [2024-12-10 21:54:41.744798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.254 [2024-12-10 21:54:41.744847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.254 [2024-12-10 21:54:41.744869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.254 [2024-12-10 21:54:41.744881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.254 [2024-12-10 21:54:41.840956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.255 [2024-12-10 21:54:41.841007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.255 [2024-12-10 21:54:41.841023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.255 [2024-12-10 21:54:41.841034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.255 [2024-12-10 21:54:41.841190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.255 [2024-12-10 21:54:41.841205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.255 [2024-12-10 21:54:41.841235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.255 [2024-12-10 21:54:41.841246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.255 [2024-12-10 21:54:41.841298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.255 [2024-12-10 21:54:41.841311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.255 [2024-12-10 21:54:41.841324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.255 [2024-12-10 21:54:41.841334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.255 [2024-12-10 21:54:41.841460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.255 [2024-12-10 21:54:41.841478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.255 [2024-12-10 21:54:41.841495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:34.255 [2024-12-10 21:54:41.841505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.255 [2024-12-10 21:54:41.841547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.255 [2024-12-10 21:54:41.841560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:34.255 [2024-12-10 21:54:41.841573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.255 [2024-12-10 21:54:41.841583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.255 [2024-12-10 21:54:41.841626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.255 [2024-12-10 21:54:41.841641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:34.255 [2024-12-10 21:54:41.841654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.255 [2024-12-10 21:54:41.841675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.255 [2024-12-10 21:54:41.841727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.255 [2024-12-10 21:54:41.841740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:34.255 [2024-12-10 21:54:41.841754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.255 [2024-12-10 21:54:41.841765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.255 [2024-12-10 21:54:41.841899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 705.455 ms, result 0 00:23:34.255 true 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 79213 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 79213 ']' 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 79213 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79213 00:23:34.255 killing process with pid 79213 00:23:34.255 Received shutdown signal, test time was about 4.000000 seconds 00:23:34.255 00:23:34.255 Latency(us) 00:23:34.255 [2024-12-10T21:54:41.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.255 [2024-12-10T21:54:41.986Z] =================================================================================================================== 00:23:34.255 [2024-12-10T21:54:41.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79213' 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 79213 00:23:34.255 21:54:41 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 79213 00:23:35.634 Remove shared memory files 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:35.634 21:54:43 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:35.634 ************************************ 00:23:35.634 END TEST ftl_bdevperf 00:23:35.634 ************************************ 00:23:35.634 00:23:35.634 real 0m23.495s 00:23:35.634 user 0m25.853s 00:23:35.634 sys 0m1.304s 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.634 21:54:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:35.634 21:54:43 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:35.634 21:54:43 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:35.634 21:54:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.634 21:54:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:35.634 ************************************ 00:23:35.634 START TEST ftl_trim 00:23:35.635 ************************************ 00:23:35.635 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:35.894 * Looking for test storage... 00:23:35.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:35.894 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:35.894 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:35.894 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:23:35.894 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:35.894 21:54:43 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:35.894 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:35.894 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:35.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.895 --rc genhtml_branch_coverage=1 00:23:35.895 --rc genhtml_function_coverage=1 00:23:35.895 --rc genhtml_legend=1 00:23:35.895 --rc geninfo_all_blocks=1 00:23:35.895 --rc geninfo_unexecuted_blocks=1 00:23:35.895 00:23:35.895 ' 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:35.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.895 --rc genhtml_branch_coverage=1 00:23:35.895 --rc genhtml_function_coverage=1 00:23:35.895 --rc genhtml_legend=1 00:23:35.895 --rc geninfo_all_blocks=1 00:23:35.895 --rc geninfo_unexecuted_blocks=1 00:23:35.895 00:23:35.895 ' 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:35.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.895 --rc genhtml_branch_coverage=1 00:23:35.895 --rc genhtml_function_coverage=1 00:23:35.895 --rc genhtml_legend=1 00:23:35.895 --rc geninfo_all_blocks=1 00:23:35.895 --rc geninfo_unexecuted_blocks=1 00:23:35.895 00:23:35.895 ' 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:35.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.895 --rc genhtml_branch_coverage=1 00:23:35.895 --rc genhtml_function_coverage=1 00:23:35.895 --rc genhtml_legend=1 00:23:35.895 --rc geninfo_all_blocks=1 00:23:35.895 --rc geninfo_unexecuted_blocks=1 00:23:35.895 00:23:35.895 ' 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
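Note: the xtrace above shows scripts/common.sh comparing the installed lcov version against 2 segment by segment: "lt 1.15 2" splits both strings on '.', '-' and ':' and compares numerically, so 1.15 < 2 holds and the 1.x-style LCOV_OPTS branch is taken. A rough Python equivalent of that comparison, assuming purely numeric segments (the real script has extra guards this sketch omits):

    # Sketch: segment-wise version compare, mirroring cmp_versions as traced above.
    import re

    def version_lt(a: str, b: str) -> bool:
        pa = [int(x) for x in re.split(r"[.\-:]", a)]
        pb = [int(x) for x in re.split(r"[.\-:]", b)]
        n = max(len(pa), len(pb))
        pa += [0] * (n - len(pa))  # unset bash array elements evaluate as 0
        pb += [0] * (n - len(pb))
        return pa < pb

    print(version_lt("1.15", "2"))  # True: lcov 1.15 predates 2.x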
00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:35.895 21:54:43 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:35.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79579 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:35.895 21:54:43 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79579 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79579 ']' 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.895 21:54:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:36.154 [2024-12-10 21:54:43.675449] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:23:36.154 [2024-12-10 21:54:43.675773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79579 ] 00:23:36.154 [2024-12-10 21:54:43.858716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:36.413 [2024-12-10 21:54:43.975923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.413 [2024-12-10 21:54:43.976129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.413 [2024-12-10 21:54:43.976176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.348 21:54:44 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.348 21:54:44 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:37.348 21:54:44 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:37.348 21:54:44 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:37.348 21:54:44 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:37.348 21:54:44 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:37.348 21:54:44 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:37.348 21:54:44 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:37.606 21:54:45 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:37.606 21:54:45 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:37.606 21:54:45 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:37.606 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:37.606 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:37.606 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:37.606 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:37.606 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:37.606 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:37.606 { 00:23:37.606 "name": "nvme0n1", 00:23:37.606 "aliases": [ 
00:23:37.606 "2a4a2d31-9979-4f79-8a6d-c6bb71b6cbd1" 00:23:37.606 ], 00:23:37.606 "product_name": "NVMe disk", 00:23:37.606 "block_size": 4096, 00:23:37.606 "num_blocks": 1310720, 00:23:37.606 "uuid": "2a4a2d31-9979-4f79-8a6d-c6bb71b6cbd1", 00:23:37.606 "numa_id": -1, 00:23:37.606 "assigned_rate_limits": { 00:23:37.606 "rw_ios_per_sec": 0, 00:23:37.606 "rw_mbytes_per_sec": 0, 00:23:37.606 "r_mbytes_per_sec": 0, 00:23:37.606 "w_mbytes_per_sec": 0 00:23:37.606 }, 00:23:37.606 "claimed": true, 00:23:37.606 "claim_type": "read_many_write_one", 00:23:37.606 "zoned": false, 00:23:37.606 "supported_io_types": { 00:23:37.606 "read": true, 00:23:37.606 "write": true, 00:23:37.606 "unmap": true, 00:23:37.606 "flush": true, 00:23:37.606 "reset": true, 00:23:37.606 "nvme_admin": true, 00:23:37.606 "nvme_io": true, 00:23:37.606 "nvme_io_md": false, 00:23:37.606 "write_zeroes": true, 00:23:37.606 "zcopy": false, 00:23:37.606 "get_zone_info": false, 00:23:37.606 "zone_management": false, 00:23:37.606 "zone_append": false, 00:23:37.606 "compare": true, 00:23:37.606 "compare_and_write": false, 00:23:37.606 "abort": true, 00:23:37.606 "seek_hole": false, 00:23:37.606 "seek_data": false, 00:23:37.606 "copy": true, 00:23:37.606 "nvme_iov_md": false 00:23:37.606 }, 00:23:37.606 "driver_specific": { 00:23:37.606 "nvme": [ 00:23:37.606 { 00:23:37.606 "pci_address": "0000:00:11.0", 00:23:37.606 "trid": { 00:23:37.606 "trtype": "PCIe", 00:23:37.606 "traddr": "0000:00:11.0" 00:23:37.606 }, 00:23:37.606 "ctrlr_data": { 00:23:37.606 "cntlid": 0, 00:23:37.606 "vendor_id": "0x1b36", 00:23:37.606 "model_number": "QEMU NVMe Ctrl", 00:23:37.606 "serial_number": "12341", 00:23:37.606 "firmware_revision": "8.0.0", 00:23:37.606 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:37.606 "oacs": { 00:23:37.606 "security": 0, 00:23:37.606 "format": 1, 00:23:37.606 "firmware": 0, 00:23:37.606 "ns_manage": 1 00:23:37.606 }, 00:23:37.606 "multi_ctrlr": false, 00:23:37.606 "ana_reporting": false 00:23:37.606 }, 00:23:37.606 "vs": { 00:23:37.606 "nvme_version": "1.4" 00:23:37.606 }, 00:23:37.606 "ns_data": { 00:23:37.606 "id": 1, 00:23:37.606 "can_share": false 00:23:37.606 } 00:23:37.606 } 00:23:37.606 ], 00:23:37.606 "mp_policy": "active_passive" 00:23:37.606 } 00:23:37.606 } 00:23:37.606 ]' 00:23:37.606 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:37.864 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:37.864 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:37.864 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:37.864 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:37.864 21:54:45 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:23:37.864 21:54:45 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:37.864 21:54:45 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:37.864 21:54:45 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:37.864 21:54:45 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:37.864 21:54:45 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:38.122 21:54:45 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=fa03631b-bbca-4314-8dae-4fbd91888ce5 00:23:38.122 21:54:45 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:38.122 21:54:45 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u fa03631b-bbca-4314-8dae-4fbd91888ce5 00:23:38.381 21:54:45 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:38.381 21:54:46 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=bb063b47-4024-469d-a5e1-452510ec0189 00:23:38.381 21:54:46 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bb063b47-4024-469d-a5e1-452510ec0189 00:23:38.640 21:54:46 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=cc33b0c2-0495-4708-b885-b82189210327 00:23:38.640 21:54:46 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cc33b0c2-0495-4708-b885-b82189210327 00:23:38.640 21:54:46 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:38.640 21:54:46 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:38.640 21:54:46 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=cc33b0c2-0495-4708-b885-b82189210327 00:23:38.640 21:54:46 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:38.640 21:54:46 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size cc33b0c2-0495-4708-b885-b82189210327 00:23:38.640 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cc33b0c2-0495-4708-b885-b82189210327 00:23:38.640 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:38.640 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:38.640 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:38.640 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc33b0c2-0495-4708-b885-b82189210327 00:23:38.898 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:38.898 { 00:23:38.898 "name": "cc33b0c2-0495-4708-b885-b82189210327", 00:23:38.898 "aliases": [ 00:23:38.898 "lvs/nvme0n1p0" 00:23:38.898 ], 00:23:38.898 "product_name": "Logical Volume", 00:23:38.898 "block_size": 4096, 00:23:38.898 "num_blocks": 26476544, 00:23:38.898 "uuid": "cc33b0c2-0495-4708-b885-b82189210327", 00:23:38.898 "assigned_rate_limits": { 00:23:38.898 "rw_ios_per_sec": 0, 00:23:38.898 "rw_mbytes_per_sec": 0, 00:23:38.898 "r_mbytes_per_sec": 0, 00:23:38.898 "w_mbytes_per_sec": 0 00:23:38.898 }, 00:23:38.898 "claimed": false, 00:23:38.898 "zoned": false, 00:23:38.898 "supported_io_types": { 00:23:38.898 "read": true, 00:23:38.898 "write": true, 00:23:38.898 "unmap": true, 00:23:38.898 "flush": false, 00:23:38.898 "reset": true, 00:23:38.898 "nvme_admin": false, 00:23:38.898 "nvme_io": false, 00:23:38.898 "nvme_io_md": false, 00:23:38.898 "write_zeroes": true, 00:23:38.898 "zcopy": false, 00:23:38.898 "get_zone_info": false, 00:23:38.898 "zone_management": false, 00:23:38.898 "zone_append": false, 00:23:38.898 "compare": false, 00:23:38.898 "compare_and_write": false, 00:23:38.898 "abort": false, 00:23:38.898 "seek_hole": true, 00:23:38.898 "seek_data": true, 00:23:38.898 "copy": false, 00:23:38.898 "nvme_iov_md": false 00:23:38.898 }, 00:23:38.898 "driver_specific": { 00:23:38.898 "lvol": { 00:23:38.898 "lvol_store_uuid": "bb063b47-4024-469d-a5e1-452510ec0189", 00:23:38.898 "base_bdev": "nvme0n1", 00:23:38.898 "thin_provision": true, 00:23:38.898 "num_allocated_clusters": 0, 00:23:38.898 "snapshot": false, 00:23:38.898 "clone": false, 00:23:38.899 "esnap_clone": false 00:23:38.899 } 00:23:38.899 } 00:23:38.899 } 00:23:38.899 ]' 00:23:38.899 21:54:46 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:38.899 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:38.899 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:38.899 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:38.899 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:38.899 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:38.899 21:54:46 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:38.899 21:54:46 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:38.899 21:54:46 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:39.157 21:54:46 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:39.157 21:54:46 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:39.157 21:54:46 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size cc33b0c2-0495-4708-b885-b82189210327 00:23:39.157 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cc33b0c2-0495-4708-b885-b82189210327 00:23:39.157 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:39.157 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:39.157 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:39.157 21:54:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc33b0c2-0495-4708-b885-b82189210327 00:23:39.416 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:39.416 { 00:23:39.416 "name": "cc33b0c2-0495-4708-b885-b82189210327", 00:23:39.416 "aliases": [ 00:23:39.416 "lvs/nvme0n1p0" 00:23:39.416 ], 00:23:39.416 "product_name": "Logical Volume", 00:23:39.416 "block_size": 4096, 00:23:39.416 "num_blocks": 26476544, 00:23:39.416 "uuid": "cc33b0c2-0495-4708-b885-b82189210327", 00:23:39.416 "assigned_rate_limits": { 00:23:39.416 "rw_ios_per_sec": 0, 00:23:39.416 "rw_mbytes_per_sec": 0, 00:23:39.416 "r_mbytes_per_sec": 0, 00:23:39.416 "w_mbytes_per_sec": 0 00:23:39.416 }, 00:23:39.416 "claimed": false, 00:23:39.416 "zoned": false, 00:23:39.416 "supported_io_types": { 00:23:39.416 "read": true, 00:23:39.416 "write": true, 00:23:39.416 "unmap": true, 00:23:39.416 "flush": false, 00:23:39.416 "reset": true, 00:23:39.416 "nvme_admin": false, 00:23:39.416 "nvme_io": false, 00:23:39.416 "nvme_io_md": false, 00:23:39.416 "write_zeroes": true, 00:23:39.416 "zcopy": false, 00:23:39.416 "get_zone_info": false, 00:23:39.416 "zone_management": false, 00:23:39.416 "zone_append": false, 00:23:39.416 "compare": false, 00:23:39.416 "compare_and_write": false, 00:23:39.416 "abort": false, 00:23:39.416 "seek_hole": true, 00:23:39.416 "seek_data": true, 00:23:39.416 "copy": false, 00:23:39.416 "nvme_iov_md": false 00:23:39.416 }, 00:23:39.416 "driver_specific": { 00:23:39.416 "lvol": { 00:23:39.416 "lvol_store_uuid": "bb063b47-4024-469d-a5e1-452510ec0189", 00:23:39.416 "base_bdev": "nvme0n1", 00:23:39.416 "thin_provision": true, 00:23:39.416 "num_allocated_clusters": 0, 00:23:39.416 "snapshot": false, 00:23:39.416 "clone": false, 00:23:39.416 "esnap_clone": false 00:23:39.416 } 00:23:39.416 } 00:23:39.416 } 00:23:39.416 ]' 00:23:39.416 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:39.416 21:54:47 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:23:39.416 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:39.416 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:39.416 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:39.416 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:39.416 21:54:47 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:39.416 21:54:47 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:39.675 21:54:47 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:39.675 21:54:47 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:39.675 21:54:47 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size cc33b0c2-0495-4708-b885-b82189210327 00:23:39.675 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=cc33b0c2-0495-4708-b885-b82189210327 00:23:39.675 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:39.675 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:39.675 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:39.675 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc33b0c2-0495-4708-b885-b82189210327 00:23:39.934 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:39.934 { 00:23:39.934 "name": "cc33b0c2-0495-4708-b885-b82189210327", 00:23:39.934 "aliases": [ 00:23:39.934 "lvs/nvme0n1p0" 00:23:39.934 ], 00:23:39.934 "product_name": "Logical Volume", 00:23:39.934 "block_size": 4096, 00:23:39.934 "num_blocks": 26476544, 00:23:39.934 "uuid": "cc33b0c2-0495-4708-b885-b82189210327", 00:23:39.934 "assigned_rate_limits": { 00:23:39.934 "rw_ios_per_sec": 0, 00:23:39.934 "rw_mbytes_per_sec": 0, 00:23:39.934 "r_mbytes_per_sec": 0, 00:23:39.934 "w_mbytes_per_sec": 0 00:23:39.934 }, 00:23:39.934 "claimed": false, 00:23:39.934 "zoned": false, 00:23:39.934 "supported_io_types": { 00:23:39.934 "read": true, 00:23:39.934 "write": true, 00:23:39.934 "unmap": true, 00:23:39.934 "flush": false, 00:23:39.934 "reset": true, 00:23:39.934 "nvme_admin": false, 00:23:39.934 "nvme_io": false, 00:23:39.934 "nvme_io_md": false, 00:23:39.934 "write_zeroes": true, 00:23:39.934 "zcopy": false, 00:23:39.934 "get_zone_info": false, 00:23:39.934 "zone_management": false, 00:23:39.934 "zone_append": false, 00:23:39.934 "compare": false, 00:23:39.934 "compare_and_write": false, 00:23:39.934 "abort": false, 00:23:39.934 "seek_hole": true, 00:23:39.934 "seek_data": true, 00:23:39.934 "copy": false, 00:23:39.934 "nvme_iov_md": false 00:23:39.934 }, 00:23:39.934 "driver_specific": { 00:23:39.934 "lvol": { 00:23:39.934 "lvol_store_uuid": "bb063b47-4024-469d-a5e1-452510ec0189", 00:23:39.934 "base_bdev": "nvme0n1", 00:23:39.934 "thin_provision": true, 00:23:39.934 "num_allocated_clusters": 0, 00:23:39.934 "snapshot": false, 00:23:39.934 "clone": false, 00:23:39.934 "esnap_clone": false 00:23:39.934 } 00:23:39.934 } 00:23:39.934 } 00:23:39.934 ]' 00:23:39.934 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:39.934 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:39.934 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:39.934 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:23:39.934 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:39.934 21:54:47 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:39.934 21:54:47 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:39.934 21:54:47 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cc33b0c2-0495-4708-b885-b82189210327 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:40.194 [2024-12-10 21:54:47.767889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.768499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.194 [2024-12-10 21:54:47.768584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:40.194 [2024-12-10 21:54:47.768640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.772184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.772443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.194 [2024-12-10 21:54:47.772552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:23:40.194 [2024-12-10 21:54:47.772607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.772810] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.194 [2024-12-10 21:54:47.774076] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.194 [2024-12-10 21:54:47.774267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.774404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.194 [2024-12-10 21:54:47.774476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:23:40.194 [2024-12-10 21:54:47.774543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.774763] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c1c8a73d-fd96-4644-ab7c-747f40be9c54 00:23:40.194 [2024-12-10 21:54:47.776384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.776597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:40.194 [2024-12-10 21:54:47.776753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:40.194 [2024-12-10 21:54:47.776777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.785384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.785421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.194 [2024-12-10 21:54:47.785437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.446 ms 00:23:40.194 [2024-12-10 21:54:47.785450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.785634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.785652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.194 [2024-12-10 21:54:47.785664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.090 ms 00:23:40.194 [2024-12-10 21:54:47.785683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.785743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.785758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.194 [2024-12-10 21:54:47.785768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:40.194 [2024-12-10 21:54:47.785785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.785854] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:40.194 [2024-12-10 21:54:47.791936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.792122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.194 [2024-12-10 21:54:47.792151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.094 ms 00:23:40.194 [2024-12-10 21:54:47.792163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.792263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.792294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.194 [2024-12-10 21:54:47.792309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:40.194 [2024-12-10 21:54:47.792320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.792375] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:40.194 [2024-12-10 21:54:47.792516] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.194 [2024-12-10 21:54:47.792535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.194 [2024-12-10 21:54:47.792550] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:40.194 [2024-12-10 21:54:47.792566] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.194 [2024-12-10 21:54:47.792579] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.194 [2024-12-10 21:54:47.792593] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:40.194 [2024-12-10 21:54:47.792604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.194 [2024-12-10 21:54:47.792618] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.194 [2024-12-10 21:54:47.792630] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.194 [2024-12-10 21:54:47.792644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 [2024-12-10 21:54:47.792655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.194 [2024-12-10 21:54:47.792668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:23:40.194 [2024-12-10 21:54:47.792678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.792788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.194 
[2024-12-10 21:54:47.792800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.194 [2024-12-10 21:54:47.792813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:40.194 [2024-12-10 21:54:47.792824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.194 [2024-12-10 21:54:47.793002] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.194 [2024-12-10 21:54:47.793016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.194 [2024-12-10 21:54:47.793029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.194 [2024-12-10 21:54:47.793040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.194 [2024-12-10 21:54:47.793065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.194 [2024-12-10 21:54:47.793075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.194 [2024-12-10 21:54:47.793087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:40.194 [2024-12-10 21:54:47.793097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.194 [2024-12-10 21:54:47.793109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:40.194 [2024-12-10 21:54:47.793118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.194 [2024-12-10 21:54:47.793132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.194 [2024-12-10 21:54:47.793141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:40.194 [2024-12-10 21:54:47.793155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.194 [2024-12-10 21:54:47.793165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.194 [2024-12-10 21:54:47.793176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:40.194 [2024-12-10 21:54:47.793186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.194 [2024-12-10 21:54:47.793200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:40.194 [2024-12-10 21:54:47.793211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:40.194 [2024-12-10 21:54:47.793223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.194 [2024-12-10 21:54:47.793233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.194 [2024-12-10 21:54:47.793245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:40.194 [2024-12-10 21:54:47.793255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.194 [2024-12-10 21:54:47.793267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.194 [2024-12-10 21:54:47.793277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:40.194 [2024-12-10 21:54:47.793289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.194 [2024-12-10 21:54:47.793298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.195 [2024-12-10 21:54:47.793311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:40.195 [2024-12-10 21:54:47.793320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.195 [2024-12-10 21:54:47.793332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:23:40.195 [2024-12-10 21:54:47.793342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:40.195 [2024-12-10 21:54:47.793354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.195 [2024-12-10 21:54:47.793363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.195 [2024-12-10 21:54:47.793377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:40.195 [2024-12-10 21:54:47.793387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.195 [2024-12-10 21:54:47.793399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.195 [2024-12-10 21:54:47.793408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:40.195 [2024-12-10 21:54:47.793421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.195 [2024-12-10 21:54:47.793431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.195 [2024-12-10 21:54:47.793443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:40.195 [2024-12-10 21:54:47.793452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.195 [2024-12-10 21:54:47.793464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.195 [2024-12-10 21:54:47.793473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:40.195 [2024-12-10 21:54:47.793484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.195 [2024-12-10 21:54:47.793494] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.195 [2024-12-10 21:54:47.793507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.195 [2024-12-10 21:54:47.793529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.195 [2024-12-10 21:54:47.793541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.195 [2024-12-10 21:54:47.793552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.195 [2024-12-10 21:54:47.793566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.195 [2024-12-10 21:54:47.793579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.195 [2024-12-10 21:54:47.793591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.195 [2024-12-10 21:54:47.793601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.195 [2024-12-10 21:54:47.793613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.195 [2024-12-10 21:54:47.793623] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.195 [2024-12-10 21:54:47.793638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.195 [2024-12-10 21:54:47.793653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:40.195 [2024-12-10 21:54:47.793665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:40.195 [2024-12-10 21:54:47.793676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:23:40.195 [2024-12-10 21:54:47.793699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:40.195 [2024-12-10 21:54:47.793711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:40.195 [2024-12-10 21:54:47.793726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:40.195 [2024-12-10 21:54:47.793737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:40.195 [2024-12-10 21:54:47.793755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:40.195 [2024-12-10 21:54:47.793765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:40.195 [2024-12-10 21:54:47.793785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:40.195 [2024-12-10 21:54:47.793796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:40.195 [2024-12-10 21:54:47.793811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:40.195 [2024-12-10 21:54:47.793822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:40.195 [2024-12-10 21:54:47.793837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:40.195 [2024-12-10 21:54:47.793847] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:40.195 [2024-12-10 21:54:47.793868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.195 [2024-12-10 21:54:47.793880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.195 [2024-12-10 21:54:47.793894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.195 [2024-12-10 21:54:47.793905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.195 [2024-12-10 21:54:47.793920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.195 [2024-12-10 21:54:47.793932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.195 [2024-12-10 21:54:47.793958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.195 [2024-12-10 21:54:47.793969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:23:40.195 [2024-12-10 21:54:47.793982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.195 [2024-12-10 21:54:47.794129] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:23:40.195 [2024-12-10 21:54:47.794149] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:44.384 [2024-12-10 21:54:51.506114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.506445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:44.384 [2024-12-10 21:54:51.506548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3718.011 ms 00:23:44.384 [2024-12-10 21:54:51.506593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.549837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.550139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:44.384 [2024-12-10 21:54:51.550250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.938 ms 00:23:44.384 [2024-12-10 21:54:51.550296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.550500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.550599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:44.384 [2024-12-10 21:54:51.550692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:44.384 [2024-12-10 21:54:51.550731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.611532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.611749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:44.384 [2024-12-10 21:54:51.611886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.800 ms 00:23:44.384 [2024-12-10 21:54:51.611932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.612137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.612230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:44.384 [2024-12-10 21:54:51.612298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:44.384 [2024-12-10 21:54:51.612334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.613161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.613302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:44.384 [2024-12-10 21:54:51.613389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:23:44.384 [2024-12-10 21:54:51.613430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.613604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.613642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:44.384 [2024-12-10 21:54:51.613742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:23:44.384 [2024-12-10 21:54:51.613787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.637476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.637645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:23:44.384 [2024-12-10 21:54:51.637800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.651 ms 00:23:44.384 [2024-12-10 21:54:51.637843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.652026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:44.384 [2024-12-10 21:54:51.676975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.677215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:44.384 [2024-12-10 21:54:51.677354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.947 ms 00:23:44.384 [2024-12-10 21:54:51.677393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.791386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.791633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:44.384 [2024-12-10 21:54:51.791720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.012 ms 00:23:44.384 [2024-12-10 21:54:51.791758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.384 [2024-12-10 21:54:51.792146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.384 [2024-12-10 21:54:51.792201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:44.384 [2024-12-10 21:54:51.792292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:23:44.385 [2024-12-10 21:54:51.792329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:51.829524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:51.829681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:44.385 [2024-12-10 21:54:51.829804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.171 ms 00:23:44.385 [2024-12-10 21:54:51.829843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:51.865546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:51.865719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:44.385 [2024-12-10 21:54:51.865823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.619 ms 00:23:44.385 [2024-12-10 21:54:51.865857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:51.866785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:51.866919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:44.385 [2024-12-10 21:54:51.867000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:23:44.385 [2024-12-10 21:54:51.867037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:51.971325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:51.971517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:44.385 [2024-12-10 21:54:51.971635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.344 ms 00:23:44.385 [2024-12-10 21:54:51.971675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
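The layout dump and the ftl_l2p_cache notice above pin down the L2P geometry: one 4-byte entry per ftl0 logical block, 23592960 entries, and a 60 MiB DRAM budget from --l2p_dram_limit. A quick sanity check of those figures; the numbers are taken from this log and the arithmetic is the only addition:

  l2p_entries=23592960    # "L2P entries" from the layout setup above
  l2p_addr_size=4         # "L2P address size" in bytes
  echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))
  # -> 90, matching "Region l2p ... blocks: 90.00 MiB" in the NV cache layout.
  # With --l2p_dram_limit 60, only about 60 MiB of that table may stay
  # resident at once, hence "l2p maximum resident size is: 59 (of 60) MiB".
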
00:23:44.385 [2024-12-10 21:54:52.009890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:52.010059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:44.385 [2024-12-10 21:54:52.010103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.085 ms 00:23:44.385 [2024-12-10 21:54:52.010115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:52.046899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:52.046941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:44.385 [2024-12-10 21:54:52.046958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.728 ms 00:23:44.385 [2024-12-10 21:54:52.046969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:52.083848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:52.083903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:44.385 [2024-12-10 21:54:52.083920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.820 ms 00:23:44.385 [2024-12-10 21:54:52.083948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:52.084081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:52.084100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:44.385 [2024-12-10 21:54:52.084117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:44.385 [2024-12-10 21:54:52.084128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:52.084265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.385 [2024-12-10 21:54:52.084277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:44.385 [2024-12-10 21:54:52.084291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:44.385 [2024-12-10 21:54:52.084302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.385 [2024-12-10 21:54:52.085745] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.385 [2024-12-10 21:54:52.089888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4324.244 ms, result 0 00:23:44.385 [2024-12-10 21:54:52.090996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:44.385 { 00:23:44.385 "name": "ftl0", 00:23:44.385 "uuid": "c1c8a73d-fd96-4644-ab7c-747f40be9c54" 00:23:44.385 } 00:23:44.643 21:54:52 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:23:44.644 21:54:52 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:44.644 21:54:52 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:44.644 21:54:52 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:23:44.644 21:54:52 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:44.644 21:54:52 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:44.644 21:54:52 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:44.644 21:54:52 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:44.902 [ 00:23:44.902 { 00:23:44.902 "name": "ftl0", 00:23:44.902 "aliases": [ 00:23:44.902 "c1c8a73d-fd96-4644-ab7c-747f40be9c54" 00:23:44.902 ], 00:23:44.902 "product_name": "FTL disk", 00:23:44.902 "block_size": 4096, 00:23:44.902 "num_blocks": 23592960, 00:23:44.902 "uuid": "c1c8a73d-fd96-4644-ab7c-747f40be9c54", 00:23:44.902 "assigned_rate_limits": { 00:23:44.902 "rw_ios_per_sec": 0, 00:23:44.902 "rw_mbytes_per_sec": 0, 00:23:44.902 "r_mbytes_per_sec": 0, 00:23:44.902 "w_mbytes_per_sec": 0 00:23:44.902 }, 00:23:44.902 "claimed": false, 00:23:44.902 "zoned": false, 00:23:44.902 "supported_io_types": { 00:23:44.902 "read": true, 00:23:44.902 "write": true, 00:23:44.902 "unmap": true, 00:23:44.902 "flush": true, 00:23:44.902 "reset": false, 00:23:44.902 "nvme_admin": false, 00:23:44.902 "nvme_io": false, 00:23:44.902 "nvme_io_md": false, 00:23:44.902 "write_zeroes": true, 00:23:44.902 "zcopy": false, 00:23:44.902 "get_zone_info": false, 00:23:44.902 "zone_management": false, 00:23:44.902 "zone_append": false, 00:23:44.902 "compare": false, 00:23:44.902 "compare_and_write": false, 00:23:44.902 "abort": false, 00:23:44.902 "seek_hole": false, 00:23:44.902 "seek_data": false, 00:23:44.902 "copy": false, 00:23:44.902 "nvme_iov_md": false 00:23:44.902 }, 00:23:44.902 "driver_specific": { 00:23:44.902 "ftl": { 00:23:44.902 "base_bdev": "cc33b0c2-0495-4708-b885-b82189210327", 00:23:44.902 "cache": "nvc0n1p0" 00:23:44.902 } 00:23:44.902 } 00:23:44.902 } 00:23:44.902 ] 00:23:44.902 21:54:52 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:23:44.902 21:54:52 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:23:44.902 21:54:52 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:45.161 21:54:52 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:23:45.161 21:54:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:23:45.420 21:54:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:23:45.420 { 00:23:45.420 "name": "ftl0", 00:23:45.420 "aliases": [ 00:23:45.420 "c1c8a73d-fd96-4644-ab7c-747f40be9c54" 00:23:45.420 ], 00:23:45.420 "product_name": "FTL disk", 00:23:45.420 "block_size": 4096, 00:23:45.420 "num_blocks": 23592960, 00:23:45.420 "uuid": "c1c8a73d-fd96-4644-ab7c-747f40be9c54", 00:23:45.420 "assigned_rate_limits": { 00:23:45.420 "rw_ios_per_sec": 0, 00:23:45.420 "rw_mbytes_per_sec": 0, 00:23:45.420 "r_mbytes_per_sec": 0, 00:23:45.420 "w_mbytes_per_sec": 0 00:23:45.420 }, 00:23:45.420 "claimed": false, 00:23:45.420 "zoned": false, 00:23:45.420 "supported_io_types": { 00:23:45.420 "read": true, 00:23:45.420 "write": true, 00:23:45.420 "unmap": true, 00:23:45.420 "flush": true, 00:23:45.420 "reset": false, 00:23:45.420 "nvme_admin": false, 00:23:45.420 "nvme_io": false, 00:23:45.420 "nvme_io_md": false, 00:23:45.420 "write_zeroes": true, 00:23:45.420 "zcopy": false, 00:23:45.420 "get_zone_info": false, 00:23:45.420 "zone_management": false, 00:23:45.420 "zone_append": false, 00:23:45.420 "compare": false, 00:23:45.420 "compare_and_write": false, 00:23:45.420 "abort": false, 00:23:45.420 "seek_hole": false, 00:23:45.420 "seek_data": false, 00:23:45.420 "copy": false, 00:23:45.420 "nvme_iov_md": false 00:23:45.420 }, 00:23:45.420 "driver_specific": { 00:23:45.420 "ftl": { 00:23:45.420 "base_bdev": "cc33b0c2-0495-4708-b885-b82189210327", 
00:23:45.420 "cache": "nvc0n1p0" 00:23:45.420 } 00:23:45.420 } 00:23:45.420 } 00:23:45.420 ]' 00:23:45.420 21:54:52 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:23:45.420 21:54:52 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:23:45.420 21:54:52 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:45.680 [2024-12-10 21:54:53.165192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.165245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:45.680 [2024-12-10 21:54:53.165282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:45.680 [2024-12-10 21:54:53.165298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.165380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:45.680 [2024-12-10 21:54:53.170030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.170194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:45.680 [2024-12-10 21:54:53.170228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:23:45.680 [2024-12-10 21:54:53.170240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.171320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.171339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:45.680 [2024-12-10 21:54:53.171353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:23:45.680 [2024-12-10 21:54:53.171364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.174246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.174272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:45.680 [2024-12-10 21:54:53.174287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:23:45.680 [2024-12-10 21:54:53.174298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.179988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.180022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:45.680 [2024-12-10 21:54:53.180037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.616 ms 00:23:45.680 [2024-12-10 21:54:53.180077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.216656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.216695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:45.680 [2024-12-10 21:54:53.216715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.464 ms 00:23:45.680 [2024-12-10 21:54:53.216743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.239420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.239462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:45.680 [2024-12-10 21:54:53.239481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.597 ms 00:23:45.680 [2024-12-10 21:54:53.239513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.239869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.239884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:45.680 [2024-12-10 21:54:53.239899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:23:45.680 [2024-12-10 21:54:53.239909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.275727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.275904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:45.680 [2024-12-10 21:54:53.275931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.815 ms 00:23:45.680 [2024-12-10 21:54:53.275941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.310990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.311179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:45.680 [2024-12-10 21:54:53.311209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.908 ms 00:23:45.680 [2024-12-10 21:54:53.311220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.346055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.346088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:45.680 [2024-12-10 21:54:53.346104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.773 ms 00:23:45.680 [2024-12-10 21:54:53.346130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.380843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.680 [2024-12-10 21:54:53.380879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:45.680 [2024-12-10 21:54:53.380894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.555 ms 00:23:45.680 [2024-12-10 21:54:53.380920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.680 [2024-12-10 21:54:53.381022] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:45.680 [2024-12-10 21:54:53.381042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:45.680 [2024-12-10 21:54:53.381079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:45.680 [2024-12-10 21:54:53.381091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:45.680 [2024-12-10 21:54:53.381105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:45.680 [2024-12-10 21:54:53.381115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:45.680 [2024-12-10 21:54:53.381133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:45.680 [2024-12-10 21:54:53.381144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:45.680 [2024-12-10 21:54:53.381158] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 8-100: 0 / 261120 wr_cnt: 0 state: free 00:23:45.681 [2024-12-10 21:54:53.382437] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:45.681 [2024-12-10 21:54:53.382453] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1c8a73d-fd96-4644-ab7c-747f40be9c54 00:23:45.681 [2024-12-10 21:54:53.382465] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:45.681 [2024-12-10 21:54:53.382477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:45.681 [2024-12-10 21:54:53.382487] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:45.681 [2024-12-10 21:54:53.382504] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:45.681 [2024-12-10 21:54:53.382514] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:45.681 [2024-12-10 21:54:53.382527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
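The statistics block above is worth a sanity check: total writes: 960 counts writes to the media (at this point presumably only the metadata persisted during shutdown), user writes: 0 counts host I/O, and WAF is their ratio, so a run that never wrote user data reports WAF: inf; the remaining limit counters (high, low, start) continue below. A minimal sketch of that arithmetic, using the two counters from the dump (the variable names are mine):

  # WAF = total media writes / user (host) writes; user_writes=0 yields "inf",
  # matching the "WAF: inf" record in the dump above.
  total_writes=960
  user_writes=0
  if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"
  else
      awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi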
00:23:45.681 [2024-12-10 21:54:53.382552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:45.681 [2024-12-10 21:54:53.382565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:45.681 [2024-12-10 21:54:53.382574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:45.681 [2024-12-10 21:54:53.382588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.682 [2024-12-10 21:54:53.382599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:45.682 [2024-12-10 21:54:53.382613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:23:45.682 [2024-12-10 21:54:53.382624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.682 [2024-12-10 21:54:53.402981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.682 [2024-12-10 21:54:53.403021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:45.682 [2024-12-10 21:54:53.403039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.334 ms 00:23:45.682 [2024-12-10 21:54:53.403083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.682 [2024-12-10 21:54:53.403766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.682 [2024-12-10 21:54:53.403786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:45.682 [2024-12-10 21:54:53.403800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:23:45.682 [2024-12-10 21:54:53.403811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.940 [2024-12-10 21:54:53.473217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.940 [2024-12-10 21:54:53.473256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:45.941 [2024-12-10 21:54:53.473273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.941 [2024-12-10 21:54:53.473284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.941 [2024-12-10 21:54:53.473443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.941 [2024-12-10 21:54:53.473456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:45.941 [2024-12-10 21:54:53.473471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.941 [2024-12-10 21:54:53.473482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.941 [2024-12-10 21:54:53.473569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.941 [2024-12-10 21:54:53.473583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:45.941 [2024-12-10 21:54:53.473603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.941 [2024-12-10 21:54:53.473613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.941 [2024-12-10 21:54:53.473673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.941 [2024-12-10 21:54:53.473684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:45.941 [2024-12-10 21:54:53.473697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.941 [2024-12-10 21:54:53.473707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.941 [2024-12-10 21:54:53.603288] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.941 [2024-12-10 21:54:53.603350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:45.941 [2024-12-10 21:54:53.603369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.941 [2024-12-10 21:54:53.603380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.200 [2024-12-10 21:54:53.703507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.200 [2024-12-10 21:54:53.703789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:46.200 [2024-12-10 21:54:53.703818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.200 [2024-12-10 21:54:53.703830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.200 [2024-12-10 21:54:53.703997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.200 [2024-12-10 21:54:53.704011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:46.200 [2024-12-10 21:54:53.704029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.200 [2024-12-10 21:54:53.704044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.200 [2024-12-10 21:54:53.704173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.200 [2024-12-10 21:54:53.704185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:46.200 [2024-12-10 21:54:53.704199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.200 [2024-12-10 21:54:53.704210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.200 [2024-12-10 21:54:53.704374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.200 [2024-12-10 21:54:53.704387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:46.200 [2024-12-10 21:54:53.704401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.200 [2024-12-10 21:54:53.704416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.200 [2024-12-10 21:54:53.704505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.200 [2024-12-10 21:54:53.704518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:46.200 [2024-12-10 21:54:53.704532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.200 [2024-12-10 21:54:53.704543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.200 [2024-12-10 21:54:53.704633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.200 [2024-12-10 21:54:53.704645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:46.200 [2024-12-10 21:54:53.704662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.200 [2024-12-10 21:54:53.704673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.200 [2024-12-10 21:54:53.704762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.200 [2024-12-10 21:54:53.704774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:46.200 [2024-12-10 21:54:53.704788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.200 [2024-12-10 21:54:53.704798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:46.200 [2024-12-10 21:54:53.705098] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.736 ms, result 0 00:23:46.200 true 00:23:46.200 21:54:53 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79579 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79579 ']' 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79579 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79579 00:23:46.200 killing process with pid 79579 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79579' 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79579 00:23:46.200 21:54:53 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79579 00:23:51.468 21:54:58 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:52.037 65536+0 records in 00:23:52.037 65536+0 records out 00:23:52.037 268435456 bytes (268 MB, 256 MiB) copied, 0.971967 s, 276 MB/s 00:23:52.037 21:54:59 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:52.037 [2024-12-10 21:54:59.681193] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
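The dd output above is internally consistent: 65536 blocks of 4 KiB is 268435456 bytes, which dd reports both as 268 MB (decimal) and 256 MiB (binary), and the 276 MB/s figure is just bytes over elapsed seconds. A quick sketch reproducing those numbers (values copied from the output above):

  # 65536 * 4096 = 268435456 bytes; dd prints both decimal MB and binary MiB.
  bytes=$((65536 * 4096))
  echo "$bytes bytes = $((bytes / 1000000)) MB = $((bytes / 1048576)) MiB"
  # Throughput the way dd computes it: bytes / seconds, in decimal MB/s.
  awk -v b="$bytes" -v s=0.971967 'BEGIN { printf "%.0f MB/s\n", b / s / 1000000 }'   # -> 276 MB/s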
00:23:52.037 [2024-12-10 21:54:59.681346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79785 ] 00:23:52.295 [2024-12-10 21:54:59.862121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.296 [2024-12-10 21:54:59.968381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.865 [2024-12-10 21:55:00.345724] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.865 [2024-12-10 21:55:00.345798] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.865 [2024-12-10 21:55:00.511901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.511954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:52.865 [2024-12-10 21:55:00.511969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:52.865 [2024-12-10 21:55:00.511979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.515209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.515249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:52.865 [2024-12-10 21:55:00.515262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:23:52.865 [2024-12-10 21:55:00.515288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.515395] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:52.865 [2024-12-10 21:55:00.516373] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:52.865 [2024-12-10 21:55:00.516410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.516421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:52.865 [2024-12-10 21:55:00.516433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:23:52.865 [2024-12-10 21:55:00.516443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.518240] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:52.865 [2024-12-10 21:55:00.536870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.536908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:52.865 [2024-12-10 21:55:00.536923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.661 ms 00:23:52.865 [2024-12-10 21:55:00.536933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.537034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.537064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:52.865 [2024-12-10 21:55:00.537077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:52.865 [2024-12-10 21:55:00.537103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.545712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
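Each management step in these runs is traced by mngt/ftl_mngt.c as a group of four records (Action at line 427, name at 428, duration at 430, status at 431), which makes the raw log verbose. When auditing step timings it can help to fold each group onto one line; a rough sketch, assuming the log has first been unwrapped to one record per line and saved as ftl0.log (a hypothetical file name):

  # Collapse each 4-record trace_step group into "name duration status".
  awk '
    /428:trace_step/ { sub(/.*name: /, "");     name = $0 }
    /430:trace_step/ { sub(/.*duration: /, ""); dur  = $0 }
    /431:trace_step/ { sub(/.*status: /, "");   printf "%-32s %12s  status %s\n", name, dur, $0 }
  ' ftl0.log
  # e.g.: "Load super block                    18.661 ms  status 0"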
00:23:52.865 [2024-12-10 21:55:00.545742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:52.865 [2024-12-10 21:55:00.545754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.578 ms 00:23:52.865 [2024-12-10 21:55:00.545780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.545884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.545899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:52.865 [2024-12-10 21:55:00.545910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:52.865 [2024-12-10 21:55:00.545921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.545954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.545966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:52.865 [2024-12-10 21:55:00.545976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:52.865 [2024-12-10 21:55:00.545986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.546008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:52.865 [2024-12-10 21:55:00.551326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.551360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:52.865 [2024-12-10 21:55:00.551372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.331 ms 00:23:52.865 [2024-12-10 21:55:00.551398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.551477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.551490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:52.865 [2024-12-10 21:55:00.551500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:52.865 [2024-12-10 21:55:00.551510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.551542] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:52.865 [2024-12-10 21:55:00.551567] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:52.865 [2024-12-10 21:55:00.551601] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:52.865 [2024-12-10 21:55:00.551618] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:52.865 [2024-12-10 21:55:00.551702] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:52.865 [2024-12-10 21:55:00.551716] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:52.865 [2024-12-10 21:55:00.551728] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:52.865 [2024-12-10 21:55:00.551744] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:52.865 [2024-12-10 21:55:00.551755] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:52.865 [2024-12-10 21:55:00.551767] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:52.865 [2024-12-10 21:55:00.551776] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:52.865 [2024-12-10 21:55:00.551786] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:52.865 [2024-12-10 21:55:00.551796] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:52.865 [2024-12-10 21:55:00.551806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.551815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:52.865 [2024-12-10 21:55:00.551825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:23:52.865 [2024-12-10 21:55:00.551834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.551905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.865 [2024-12-10 21:55:00.551919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:52.865 [2024-12-10 21:55:00.551929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:52.865 [2024-12-10 21:55:00.551938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.865 [2024-12-10 21:55:00.552018] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:52.865 [2024-12-10 21:55:00.552030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:52.865 [2024-12-10 21:55:00.552041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:52.866 [2024-12-10 21:55:00.552105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:52.866 [2024-12-10 21:55:00.552136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.866 [2024-12-10 21:55:00.552155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:52.866 [2024-12-10 21:55:00.552195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:52.866 [2024-12-10 21:55:00.552205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.866 [2024-12-10 21:55:00.552214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:52.866 [2024-12-10 21:55:00.552224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:52.866 [2024-12-10 21:55:00.552233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:52.866 [2024-12-10 21:55:00.552252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552262] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:52.866 [2024-12-10 21:55:00.552281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:52.866 [2024-12-10 21:55:00.552308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:52.866 [2024-12-10 21:55:00.552336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:52.866 [2024-12-10 21:55:00.552365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:52.866 [2024-12-10 21:55:00.552392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.866 [2024-12-10 21:55:00.552409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:52.866 [2024-12-10 21:55:00.552418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:52.866 [2024-12-10 21:55:00.552426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.866 [2024-12-10 21:55:00.552435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:52.866 [2024-12-10 21:55:00.552444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:52.866 [2024-12-10 21:55:00.552453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:52.866 [2024-12-10 21:55:00.552471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:52.866 [2024-12-10 21:55:00.552481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552490] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:52.866 [2024-12-10 21:55:00.552500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:52.866 [2024-12-10 21:55:00.552512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.866 [2024-12-10 21:55:00.552532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:52.866 [2024-12-10 21:55:00.552541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:52.866 [2024-12-10 21:55:00.552550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:52.866 
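The region sizes being dumped here follow from the geometry reported at layout setup: 23592960 L2P entries (one per 4 KiB user block, the same figure jq extracted as num_blocks before the earlier unload) at 4 bytes each is exactly the 90.00 MiB shown for the l2p region. A one-line check of that arithmetic:

  # 23592960 entries * 4 B = 94371840 B; / 1048576 = 90 MiB ("blocks: 90.00 MiB" above)
  entries=23592960; entry_size=4
  echo "$((entries * entry_size)) bytes = $((entries * entry_size / 1048576)) MiB"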
[2024-12-10 21:55:00.552559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:52.866 [2024-12-10 21:55:00.552568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:52.866 [2024-12-10 21:55:00.552577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:52.866 [2024-12-10 21:55:00.552604] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:52.866 [2024-12-10 21:55:00.552617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.866 [2024-12-10 21:55:00.552628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:52.866 [2024-12-10 21:55:00.552639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:52.866 [2024-12-10 21:55:00.552649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:52.866 [2024-12-10 21:55:00.552660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:52.866 [2024-12-10 21:55:00.552670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:52.866 [2024-12-10 21:55:00.552681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:52.866 [2024-12-10 21:55:00.552691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:52.866 [2024-12-10 21:55:00.552701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:52.866 [2024-12-10 21:55:00.552711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:52.866 [2024-12-10 21:55:00.552726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:52.866 [2024-12-10 21:55:00.552736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:52.866 [2024-12-10 21:55:00.552746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:52.866 [2024-12-10 21:55:00.552758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:52.866 [2024-12-10 21:55:00.552768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:52.866 [2024-12-10 21:55:00.552778] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:52.866 [2024-12-10 21:55:00.552790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.866 [2024-12-10 21:55:00.552801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:52.866 [2024-12-10 21:55:00.552812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:52.866 [2024-12-10 21:55:00.552822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:52.866 [2024-12-10 21:55:00.552834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:52.866 [2024-12-10 21:55:00.552846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.866 [2024-12-10 21:55:00.552860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:52.866 [2024-12-10 21:55:00.552870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:23:52.866 [2024-12-10 21:55:00.552880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.866 [2024-12-10 21:55:00.592743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.866 [2024-12-10 21:55:00.592783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:52.866 [2024-12-10 21:55:00.592797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.869 ms 00:23:52.866 [2024-12-10 21:55:00.592824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.866 [2024-12-10 21:55:00.592946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.866 [2024-12-10 21:55:00.592960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:52.866 [2024-12-10 21:55:00.592973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:52.866 [2024-12-10 21:55:00.592983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.650439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.650481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.126 [2024-12-10 21:55:00.650500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.526 ms 00:23:53.126 [2024-12-10 21:55:00.650510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.650610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.650623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.126 [2024-12-10 21:55:00.650635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:53.126 [2024-12-10 21:55:00.650646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.651122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.651144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.126 [2024-12-10 21:55:00.651155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:23:53.126 [2024-12-10 21:55:00.651170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.651288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.651301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.126 [2024-12-10 21:55:00.651311] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:53.126 [2024-12-10 21:55:00.651321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.670676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.670710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.126 [2024-12-10 21:55:00.670724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.362 ms 00:23:53.126 [2024-12-10 21:55:00.670752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.689315] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:53.126 [2024-12-10 21:55:00.689356] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:53.126 [2024-12-10 21:55:00.689372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.689383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:53.126 [2024-12-10 21:55:00.689395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.541 ms 00:23:53.126 [2024-12-10 21:55:00.689404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.717829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.717867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:53.126 [2024-12-10 21:55:00.717881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.389 ms 00:23:53.126 [2024-12-10 21:55:00.717891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.735608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.735656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:53.126 [2024-12-10 21:55:00.735670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.664 ms 00:23:53.126 [2024-12-10 21:55:00.735679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.752664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.752701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:53.126 [2024-12-10 21:55:00.752714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.935 ms 00:23:53.126 [2024-12-10 21:55:00.752723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.753519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.753551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:53.126 [2024-12-10 21:55:00.753563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:23:53.126 [2024-12-10 21:55:00.753574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.126 [2024-12-10 21:55:00.836406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.126 [2024-12-10 21:55:00.836474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:53.127 [2024-12-10 21:55:00.836492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.936 ms 00:23:53.127 [2024-12-10 21:55:00.836520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.127 [2024-12-10 21:55:00.846699] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:53.386 [2024-12-10 21:55:00.862563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.386 [2024-12-10 21:55:00.862615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:53.386 [2024-12-10 21:55:00.862632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.991 ms 00:23:53.386 [2024-12-10 21:55:00.862660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.386 [2024-12-10 21:55:00.862798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.386 [2024-12-10 21:55:00.862812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:53.386 [2024-12-10 21:55:00.862824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:53.386 [2024-12-10 21:55:00.862834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.386 [2024-12-10 21:55:00.862892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.386 [2024-12-10 21:55:00.862905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:53.386 [2024-12-10 21:55:00.862916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:53.386 [2024-12-10 21:55:00.862927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.386 [2024-12-10 21:55:00.862962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.386 [2024-12-10 21:55:00.862981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:53.386 [2024-12-10 21:55:00.862992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:53.386 [2024-12-10 21:55:00.863002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.386 [2024-12-10 21:55:00.863042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:53.386 [2024-12-10 21:55:00.863055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.386 [2024-12-10 21:55:00.863078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:53.386 [2024-12-10 21:55:00.863090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:53.386 [2024-12-10 21:55:00.863100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.386 [2024-12-10 21:55:00.900500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.386 [2024-12-10 21:55:00.900652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:53.386 [2024-12-10 21:55:00.900748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.437 ms 00:23:53.386 [2024-12-10 21:55:00.900786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.386 [2024-12-10 21:55:00.900964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.386 [2024-12-10 21:55:00.901010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:53.386 [2024-12-10 21:55:00.901116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:53.386 [2024-12-10 21:55:00.901154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
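Startup finishes just below, after which spdk_dd streams the 256 MB random pattern into ftl0; the progress stamps that follow run from shortly after 21:55:00.9 (when startup completes) to 21:55:12.5, so the reported average of 22 MBps is simply total size over elapsed wall time. A back-of-envelope check (the ~11.5 s span is read off those timestamps):

  # 256 MB over ~11.5 s of copy time ~= 22 MBps, matching "(average 22 MBps)" below.
  awk 'BEGIN { printf "%.0f MBps\n", 256 / 11.5 }'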
00:23:53.386 [2024-12-10 21:55:00.902126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:53.386 [2024-12-10 21:55:00.906386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.538 ms, result 0 00:23:53.386 [2024-12-10 21:55:00.907325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:53.386 [2024-12-10 21:55:00.925539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:54.324
[2024-12-10T21:55:02.992Z] Copying: 22/256 [MB] (22 MBps)
[2024-12-10T21:55:03.930Z] Copying: 45/256 [MB] (22 MBps)
[2024-12-10T21:55:04.952Z] Copying: 67/256 [MB] (22 MBps)
[2024-12-10T21:55:06.329Z] Copying: 90/256 [MB] (22 MBps)
[2024-12-10T21:55:07.266Z] Copying: 113/256 [MB] (23 MBps)
[2024-12-10T21:55:08.202Z] Copying: 134/256 [MB] (21 MBps)
[2024-12-10T21:55:09.141Z] Copying: 157/256 [MB] (23 MBps)
[2024-12-10T21:55:10.077Z] Copying: 180/256 [MB] (23 MBps)
[2024-12-10T21:55:11.012Z] Copying: 203/256 [MB] (23 MBps)
[2024-12-10T21:55:11.948Z] Copying: 226/256 [MB] (22 MBps)
[2024-12-10T21:55:12.518Z] Copying: 249/256 [MB] (22 MBps)
[2024-12-10T21:55:12.518Z] Copying: 256/256 [MB] (average 22 MBps)
[2024-12-10 21:55:12.218860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:04.787 [2024-12-10 21:55:12.233082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.233122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:04.787 [2024-12-10 21:55:12.233137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:04.787 [2024-12-10 21:55:12.233147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.233182] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:04.787 [2024-12-10 21:55:12.237658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.237691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:04.787 [2024-12-10 21:55:12.237703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.468 ms 00:24:04.787 [2024-12-10 21:55:12.237713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.239522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.239669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:04.787 [2024-12-10 21:55:12.239755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.788 ms 00:24:04.787 [2024-12-10 21:55:12.239794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.246221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.246371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:04.787 [2024-12-10 21:55:12.246513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.392 ms 00:24:04.787 [2024-12-10 21:55:12.246530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.251937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787
[2024-12-10 21:55:12.251970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:04.787 [2024-12-10 21:55:12.251982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.301 ms 00:24:04.787 [2024-12-10 21:55:12.251991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.286646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.286680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:04.787 [2024-12-10 21:55:12.286694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.675 ms 00:24:04.787 [2024-12-10 21:55:12.286703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.307185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.307228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:04.787 [2024-12-10 21:55:12.307245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.461 ms 00:24:04.787 [2024-12-10 21:55:12.307255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.307383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.307396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:04.787 [2024-12-10 21:55:12.307407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:04.787 [2024-12-10 21:55:12.307428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.342125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.342158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:04.787 [2024-12-10 21:55:12.342171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.735 ms 00:24:04.787 [2024-12-10 21:55:12.342197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.376129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.376270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:04.787 [2024-12-10 21:55:12.376289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.931 ms 00:24:04.787 [2024-12-10 21:55:12.376315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.410057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.410091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:04.787 [2024-12-10 21:55:12.410104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.724 ms 00:24:04.787 [2024-12-10 21:55:12.410113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.444151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.787 [2024-12-10 21:55:12.444188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:04.787 [2024-12-10 21:55:12.444200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.017 ms 00:24:04.787 [2024-12-10 21:55:12.444208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.787 [2024-12-10 21:55:12.444278] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:04.787 [2024-12-10 21:55:12.444297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... per-band lines for Bands 2-98 elided; every one reads: 0 / 261120 wr_cnt: 0 state: free ...] 00:24:04.788 [2024-12-10 21:55:12.445373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120
wr_cnt: 0 state: free 00:24:04.788 [2024-12-10 21:55:12.445383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:04.788 [2024-12-10 21:55:12.445401] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:04.788 [2024-12-10 21:55:12.445410] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1c8a73d-fd96-4644-ab7c-747f40be9c54 00:24:04.788 [2024-12-10 21:55:12.445422] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:04.788 [2024-12-10 21:55:12.445431] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:04.788 [2024-12-10 21:55:12.445441] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:04.788 [2024-12-10 21:55:12.445451] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:04.788 [2024-12-10 21:55:12.445461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:04.788 [2024-12-10 21:55:12.445471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:04.788 [2024-12-10 21:55:12.445480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:04.788 [2024-12-10 21:55:12.445490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:04.788 [2024-12-10 21:55:12.445499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:04.789 [2024-12-10 21:55:12.445509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.789 [2024-12-10 21:55:12.445524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:04.789 [2024-12-10 21:55:12.445535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:24:04.789 [2024-12-10 21:55:12.445545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.789 [2024-12-10 21:55:12.464948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.789 [2024-12-10 21:55:12.464981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:04.789 [2024-12-10 21:55:12.464992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.413 ms 00:24:04.789 [2024-12-10 21:55:12.465001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.789 [2024-12-10 21:55:12.465632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.789 [2024-12-10 21:55:12.465656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:04.789 [2024-12-10 21:55:12.465667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:24:04.789 [2024-12-10 21:55:12.465676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.517917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.517953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.048 [2024-12-10 21:55:12.517966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.517992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.518199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.518262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.048 [2024-12-10 21:55:12.518294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.518306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.518376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.518389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.048 [2024-12-10 21:55:12.518401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.518412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.518431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.518448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.048 [2024-12-10 21:55:12.518459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.518469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.640711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.640912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.048 [2024-12-10 21:55:12.640938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.640951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.739038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.739123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.048 [2024-12-10 21:55:12.739137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.739149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.739231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.739244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.048 [2024-12-10 21:55:12.739255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.739265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.739296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.739308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.048 [2024-12-10 21:55:12.739325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.739335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.739466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.739479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.048 [2024-12-10 21:55:12.739491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.739502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.739541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.739554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:05.048 
[2024-12-10 21:55:12.739565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.739580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.739623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.739634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.048 [2024-12-10 21:55:12.739645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.739655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.739702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.048 [2024-12-10 21:55:12.739714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.048 [2024-12-10 21:55:12.739729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.048 [2024-12-10 21:55:12.739739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.048 [2024-12-10 21:55:12.739889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 507.634 ms, result 0 00:24:06.427 00:24:06.427 00:24:06.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.427 21:55:13 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79936 00:24:06.427 21:55:13 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79936 00:24:06.427 21:55:13 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:06.427 21:55:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79936 ']' 00:24:06.427 21:55:13 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.427 21:55:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.427 21:55:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.427 21:55:13 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.427 21:55:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:06.427 [2024-12-10 21:55:14.030013] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
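The trim.sh trace above is the stock SPDK bring-up pattern: start spdk_tgt with FTL init logging, record its pid in svcpid, and block in waitforlisten until the RPC socket answers. A minimal sketch of the equivalent shell follows, assuming the repo path shown in the log and the default RPC socket /var/tmp/spdk.sock; the polling loop is an illustrative stand-in for the waitforlisten helper in autotest_common.sh, and rpc_get_methods is used only as a cheap liveness probe:

  #!/usr/bin/env bash
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # Start the target with the ftl_init log flag, as trim.sh@71 does above.
  "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!
  # Poll until the RPC server accepts commands on the UNIX domain socket.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
  echo "spdk_tgt ($svcpid) is up; ready for bdev_ftl_* RPCs"

Once the socket is live, the test drives the target purely over rpc.py, as the load_config and bdev_ftl_unmap calls below show.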
00:24:06.427 [2024-12-10 21:55:14.030392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79936 ] 00:24:06.686 [2024-12-10 21:55:14.214615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.686 [2024-12-10 21:55:14.323304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.622 21:55:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.622 21:55:15 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:07.622 21:55:15 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:07.882 [2024-12-10 21:55:15.376808] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.882 [2024-12-10 21:55:15.377088] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.882 [2024-12-10 21:55:15.556782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.556833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.882 [2024-12-10 21:55:15.556853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:07.882 [2024-12-10 21:55:15.556863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.560776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.560940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.882 [2024-12-10 21:55:15.560983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.898 ms 00:24:07.882 [2024-12-10 21:55:15.560995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.561195] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.882 [2024-12-10 21:55:15.562230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.882 [2024-12-10 21:55:15.562265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.562277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.882 [2024-12-10 21:55:15.562291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:24:07.882 [2024-12-10 21:55:15.562302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.564121] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.882 [2024-12-10 21:55:15.583837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.583885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.882 [2024-12-10 21:55:15.583917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.752 ms 00:24:07.882 [2024-12-10 21:55:15.583934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.584042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.584088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.882 [2024-12-10 21:55:15.584101] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:07.882 [2024-12-10 21:55:15.584117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.591565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.591609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.882 [2024-12-10 21:55:15.591622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.401 ms 00:24:07.882 [2024-12-10 21:55:15.591638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.591761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.591779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.882 [2024-12-10 21:55:15.591789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:07.882 [2024-12-10 21:55:15.591806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.591833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.591846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.882 [2024-12-10 21:55:15.591856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:07.882 [2024-12-10 21:55:15.591868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.591892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:07.882 [2024-12-10 21:55:15.596627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.596657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.882 [2024-12-10 21:55:15.596674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.728 ms 00:24:07.882 [2024-12-10 21:55:15.596684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.596763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.882 [2024-12-10 21:55:15.596775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.882 [2024-12-10 21:55:15.596790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:07.882 [2024-12-10 21:55:15.596806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.882 [2024-12-10 21:55:15.596833] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.882 [2024-12-10 21:55:15.596860] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.882 [2024-12-10 21:55:15.596913] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.883 [2024-12-10 21:55:15.596933] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:07.883 [2024-12-10 21:55:15.597023] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.883 [2024-12-10 21:55:15.597037] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.883 [2024-12-10 21:55:15.597075] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:07.883 [2024-12-10 21:55:15.597106] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597123] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597135] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:07.883 [2024-12-10 21:55:15.597151] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.883 [2024-12-10 21:55:15.597161] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.883 [2024-12-10 21:55:15.597180] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.883 [2024-12-10 21:55:15.597191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.883 [2024-12-10 21:55:15.597205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.883 [2024-12-10 21:55:15.597216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:24:07.883 [2024-12-10 21:55:15.597231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.883 [2024-12-10 21:55:15.597310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.883 [2024-12-10 21:55:15.597326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.883 [2024-12-10 21:55:15.597337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:07.883 [2024-12-10 21:55:15.597351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.883 [2024-12-10 21:55:15.597440] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.883 [2024-12-10 21:55:15.597458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.883 [2024-12-10 21:55:15.597469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.883 [2024-12-10 21:55:15.597511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.883 [2024-12-10 21:55:15.597549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.883 [2024-12-10 21:55:15.597573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.883 [2024-12-10 21:55:15.597587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:07.883 [2024-12-10 21:55:15.597599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.883 [2024-12-10 21:55:15.597613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.883 [2024-12-10 21:55:15.597623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:07.883 [2024-12-10 21:55:15.597637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.883 
[2024-12-10 21:55:15.597647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:07.883 [2024-12-10 21:55:15.597662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.883 [2024-12-10 21:55:15.597707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.883 [2024-12-10 21:55:15.597750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.883 [2024-12-10 21:55:15.597783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.883 [2024-12-10 21:55:15.597821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.883 [2024-12-10 21:55:15.597844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.883 [2024-12-10 21:55:15.597854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.883 [2024-12-10 21:55:15.597877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.883 [2024-12-10 21:55:15.597890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:07.883 [2024-12-10 21:55:15.597899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.883 [2024-12-10 21:55:15.597913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.883 [2024-12-10 21:55:15.597923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:07.883 [2024-12-10 21:55:15.597941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.883 [2024-12-10 21:55:15.597965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:07.883 [2024-12-10 21:55:15.597975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.883 [2024-12-10 21:55:15.597989] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.883 [2024-12-10 21:55:15.598007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.883 [2024-12-10 21:55:15.598023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.883 [2024-12-10 21:55:15.598033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.883 [2024-12-10 21:55:15.598057] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:07.883 [2024-12-10 21:55:15.598068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.883 [2024-12-10 21:55:15.598083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:07.883 [2024-12-10 21:55:15.598093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.883 [2024-12-10 21:55:15.598107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.883 [2024-12-10 21:55:15.598117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.884 [2024-12-10 21:55:15.598148] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.884 [2024-12-10 21:55:15.598161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.884 [2024-12-10 21:55:15.598185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:07.884 [2024-12-10 21:55:15.598196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:07.884 [2024-12-10 21:55:15.598212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:07.884 [2024-12-10 21:55:15.598223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:07.884 [2024-12-10 21:55:15.598240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:07.884 [2024-12-10 21:55:15.598250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:07.884 [2024-12-10 21:55:15.598265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:07.884 [2024-12-10 21:55:15.598276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:07.884 [2024-12-10 21:55:15.598291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:07.884 [2024-12-10 21:55:15.598302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:07.884 [2024-12-10 21:55:15.598317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:07.884 [2024-12-10 21:55:15.598328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:07.884 [2024-12-10 21:55:15.598343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:07.884 [2024-12-10 21:55:15.598354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:07.884 [2024-12-10 21:55:15.598377] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:07.884 [2024-12-10 
21:55:15.598389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.884 [2024-12-10 21:55:15.598409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.884 [2024-12-10 21:55:15.598420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.884 [2024-12-10 21:55:15.598435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.884 [2024-12-10 21:55:15.598446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.884 [2024-12-10 21:55:15.598463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.884 [2024-12-10 21:55:15.598490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.884 [2024-12-10 21:55:15.598506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:24:07.884 [2024-12-10 21:55:15.598522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.643595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.643632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.143 [2024-12-10 21:55:15.643651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.077 ms 00:24:08.143 [2024-12-10 21:55:15.643667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.643783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.643795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:08.143 [2024-12-10 21:55:15.643811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:08.143 [2024-12-10 21:55:15.643821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.694970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.695014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.143 [2024-12-10 21:55:15.695033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.202 ms 00:24:08.143 [2024-12-10 21:55:15.695060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.695170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.695185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.143 [2024-12-10 21:55:15.695201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:08.143 [2024-12-10 21:55:15.695211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.695676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.695694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.143 [2024-12-10 21:55:15.695717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:24:08.143 [2024-12-10 21:55:15.695727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.695855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.695869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.143 [2024-12-10 21:55:15.695886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:08.143 [2024-12-10 21:55:15.695896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.718792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.718984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.143 [2024-12-10 21:55:15.719017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.902 ms 00:24:08.143 [2024-12-10 21:55:15.719029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.770378] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:08.143 [2024-12-10 21:55:15.770422] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:08.143 [2024-12-10 21:55:15.770445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.770456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:08.143 [2024-12-10 21:55:15.770473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.357 ms 00:24:08.143 [2024-12-10 21:55:15.770497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.143 [2024-12-10 21:55:15.799259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.143 [2024-12-10 21:55:15.799300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:08.143 [2024-12-10 21:55:15.799319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.714 ms 00:24:08.143 [2024-12-10 21:55:15.799330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.144 [2024-12-10 21:55:15.816643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.144 [2024-12-10 21:55:15.816690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:08.144 [2024-12-10 21:55:15.816716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.229 ms 00:24:08.144 [2024-12-10 21:55:15.816725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.144 [2024-12-10 21:55:15.833948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.144 [2024-12-10 21:55:15.834140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:08.144 [2024-12-10 21:55:15.834172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.167 ms 00:24:08.144 [2024-12-10 21:55:15.834183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.144 [2024-12-10 21:55:15.834968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.144 [2024-12-10 21:55:15.834995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:08.144 [2024-12-10 21:55:15.835013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:24:08.144 [2024-12-10 21:55:15.835024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 
21:55:15.924668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.403 [2024-12-10 21:55:15.924752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:08.403 [2024-12-10 21:55:15.924778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.728 ms 00:24:08.403 [2024-12-10 21:55:15.924789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 21:55:15.934985] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:08.403 [2024-12-10 21:55:15.958156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.403 [2024-12-10 21:55:15.958219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:08.403 [2024-12-10 21:55:15.958235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.290 ms 00:24:08.403 [2024-12-10 21:55:15.958251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 21:55:15.958358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.403 [2024-12-10 21:55:15.958386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:08.403 [2024-12-10 21:55:15.958399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:08.403 [2024-12-10 21:55:15.958416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 21:55:15.958479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.403 [2024-12-10 21:55:15.958498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.403 [2024-12-10 21:55:15.958517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:08.403 [2024-12-10 21:55:15.958532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 21:55:15.958560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.403 [2024-12-10 21:55:15.958580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.403 [2024-12-10 21:55:15.958591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:08.403 [2024-12-10 21:55:15.958608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 21:55:15.958651] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:08.403 [2024-12-10 21:55:15.958682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.403 [2024-12-10 21:55:15.958693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:08.403 [2024-12-10 21:55:15.958709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:08.403 [2024-12-10 21:55:15.958725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 21:55:15.995639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.403 [2024-12-10 21:55:15.995682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.403 [2024-12-10 21:55:15.995704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.936 ms 00:24:08.403 [2024-12-10 21:55:15.995715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 21:55:15.995837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.403 [2024-12-10 21:55:15.995851] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.403 [2024-12-10 21:55:15.995874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:08.403 [2024-12-10 21:55:15.995885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.403 [2024-12-10 21:55:15.996930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.403 [2024-12-10 21:55:16.001137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 440.477 ms, result 0 00:24:08.403 [2024-12-10 21:55:16.002314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:08.403 Some configs were skipped because the RPC state that can call them passed over. 00:24:08.403 21:55:16 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:08.661 [2024-12-10 21:55:16.257924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.661 [2024-12-10 21:55:16.258144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:08.661 [2024-12-10 21:55:16.258300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:24:08.661 [2024-12-10 21:55:16.258347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.661 [2024-12-10 21:55:16.258439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.130 ms, result 0 00:24:08.661 true 00:24:08.661 21:55:16 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:08.919 [2024-12-10 21:55:16.465398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.919 [2024-12-10 21:55:16.465442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:08.919 [2024-12-10 21:55:16.465463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:24:08.919 [2024-12-10 21:55:16.465474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.919 [2024-12-10 21:55:16.465524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.321 ms, result 0 00:24:08.919 true 00:24:08.919 21:55:16 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79936 00:24:08.919 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79936 ']' 00:24:08.919 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79936 00:24:08.919 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:08.919 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.919 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79936 00:24:08.919 killing process with pid 79936 00:24:08.920 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.920 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.920 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79936' 00:24:08.920 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79936 00:24:08.920 21:55:16 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79936 00:24:10.299 [2024-12-10 21:55:17.609583] 
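The two trim.sh invocations above are the heart of the ftl_trim case: one bdev_ftl_unmap at the start of the device and one at the very end. Note that 23591936 = 23592960 - 1024, so the second call trims the last 1024 blocks of the 23592960-entry L2P reported during startup, and the bare 'true' lines are the RPC's JSON result. A minimal sketch of the same pair of calls, assuming the socket and bdev name from this run:

  #!/usr/bin/env bash
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  L2P_ENTRIES=23592960   # 'L2P entries' value from the FTL startup dump above
  NUM=1024
  # Trim the first 1024 blocks (trim.sh@78) ...
  "$RPC" -s /var/tmp/spdk.sock bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$NUM"
  # ... and the last 1024 blocks (trim.sh@79); each prints 'true' on success.
  "$RPC" -s /var/tmp/spdk.sock bdev_ftl_unmap -b ftl0 --lba "$((L2P_ENTRIES - NUM))" --num_blocks "$NUM"

After both trims succeed, the test kills pid 79936 and the target runs its 'FTL shutdown' management process, traced below.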
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.609651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:10.299 [2024-12-10 21:55:17.609683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:10.299 [2024-12-10 21:55:17.609699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.609723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:10.299 [2024-12-10 21:55:17.613983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.614016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:10.299 [2024-12-10 21:55:17.614034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.244 ms 00:24:10.299 [2024-12-10 21:55:17.614045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.614315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.614329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:10.299 [2024-12-10 21:55:17.614342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:24:10.299 [2024-12-10 21:55:17.614352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.617808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.617849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:10.299 [2024-12-10 21:55:17.617865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.429 ms 00:24:10.299 [2024-12-10 21:55:17.617875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.623201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.623235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:10.299 [2024-12-10 21:55:17.623250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.291 ms 00:24:10.299 [2024-12-10 21:55:17.623259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.637372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.637416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:10.299 [2024-12-10 21:55:17.637435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.068 ms 00:24:10.299 [2024-12-10 21:55:17.637444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.648748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.648785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:10.299 [2024-12-10 21:55:17.648801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.250 ms 00:24:10.299 [2024-12-10 21:55:17.648827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.648976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.648989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:10.299 [2024-12-10 21:55:17.649002] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:10.299 [2024-12-10 21:55:17.649012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.664336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.664508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:10.299 [2024-12-10 21:55:17.664540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.325 ms 00:24:10.299 [2024-12-10 21:55:17.664550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.679035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.679076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:10.299 [2024-12-10 21:55:17.679116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.408 ms 00:24:10.299 [2024-12-10 21:55:17.679125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.692812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.692847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:10.299 [2024-12-10 21:55:17.692864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.646 ms 00:24:10.299 [2024-12-10 21:55:17.692874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.706764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.299 [2024-12-10 21:55:17.706798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:10.299 [2024-12-10 21:55:17.706815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.838 ms 00:24:10.299 [2024-12-10 21:55:17.706841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.299 [2024-12-10 21:55:17.706914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:10.299 [2024-12-10 21:55:17.706932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.706955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.706966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.706983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.706994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 
21:55:17.707095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:10.299 [2024-12-10 21:55:17.707160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:24:10.300 [2024-12-10 21:55:17.707453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.707990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:10.300 [2024-12-10 21:55:17.708380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:10.301 [2024-12-10 21:55:17.708410] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:10.301 [2024-12-10 21:55:17.708439] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1c8a73d-fd96-4644-ab7c-747f40be9c54 00:24:10.301 [2024-12-10 21:55:17.708451] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:10.301 [2024-12-10 21:55:17.708467] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:10.301 [2024-12-10 21:55:17.708477] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:10.301 [2024-12-10 21:55:17.708493] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:10.301 [2024-12-10 21:55:17.708503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:10.301 [2024-12-10 21:55:17.708519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:10.301 [2024-12-10 21:55:17.708529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:10.301 [2024-12-10 21:55:17.708545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:10.301 [2024-12-10 21:55:17.708554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:10.301 [2024-12-10 21:55:17.708569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:10.301 [2024-12-10 21:55:17.708581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:10.301 [2024-12-10 21:55:17.708597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:24:10.301 [2024-12-10 21:55:17.708613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:17.728398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.301 [2024-12-10 21:55:17.728430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:10.301 [2024-12-10 21:55:17.728453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.786 ms 00:24:10.301 [2024-12-10 21:55:17.728463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:17.729019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.301 [2024-12-10 21:55:17.729040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:10.301 [2024-12-10 21:55:17.729070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:24:10.301 [2024-12-10 21:55:17.729081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:17.795943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:17.795980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:10.301 [2024-12-10 21:55:17.795995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:17.796023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:17.796120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:17.796135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:10.301 [2024-12-10 21:55:17.796149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:17.796160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:17.796221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:17.796234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:10.301 [2024-12-10 21:55:17.796250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:17.796260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:17.796283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:17.796293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:10.301 [2024-12-10 21:55:17.796306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:17.796318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:17.915302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:17.915361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:10.301 [2024-12-10 21:55:17.915400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:17.915411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 
21:55:18.013563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:18.013793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:10.301 [2024-12-10 21:55:18.013832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:18.013860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:18.013959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:18.013973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.301 [2024-12-10 21:55:18.013995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:18.014006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:18.014043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:18.014072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.301 [2024-12-10 21:55:18.014089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:18.014099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:18.014250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:18.014264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.301 [2024-12-10 21:55:18.014281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:18.014291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:18.014341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:18.014354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:10.301 [2024-12-10 21:55:18.014379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:18.014390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:18.014445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:18.014457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.301 [2024-12-10 21:55:18.014479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:18.014489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:18.014540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.301 [2024-12-10 21:55:18.014552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.301 [2024-12-10 21:55:18.014568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.301 [2024-12-10 21:55:18.014579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.301 [2024-12-10 21:55:18.014745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.781 ms, result 0 00:24:11.679 21:55:19 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:11.679 21:55:19 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:11.679 [2024-12-10 21:55:19.123251] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:24:11.679 [2024-12-10 21:55:19.123402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79995 ] 00:24:11.679 [2024-12-10 21:55:19.304989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.938 [2024-12-10 21:55:19.412167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.197 [2024-12-10 21:55:19.777313] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:12.197 [2024-12-10 21:55:19.777579] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:12.457 [2024-12-10 21:55:19.940005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.940074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:12.457 [2024-12-10 21:55:19.940108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:12.457 [2024-12-10 21:55:19.940118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.457 [2024-12-10 21:55:19.943356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.943396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:12.457 [2024-12-10 21:55:19.943409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:24:12.457 [2024-12-10 21:55:19.943435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.457 [2024-12-10 21:55:19.943537] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:12.457 [2024-12-10 21:55:19.944487] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:12.457 [2024-12-10 21:55:19.944523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.944534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:12.457 [2024-12-10 21:55:19.944546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:24:12.457 [2024-12-10 21:55:19.944556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.457 [2024-12-10 21:55:19.946396] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:12.457 [2024-12-10 21:55:19.965423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.965461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:12.457 [2024-12-10 21:55:19.965476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.057 ms 00:24:12.457 [2024-12-10 21:55:19.965487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.457 [2024-12-10 21:55:19.965587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.965601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:12.457 [2024-12-10 21:55:19.965613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.023 ms 00:24:12.457 [2024-12-10 21:55:19.965622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.457 [2024-12-10 21:55:19.976176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.976341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:12.457 [2024-12-10 21:55:19.976378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.531 ms 00:24:12.457 [2024-12-10 21:55:19.976390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.457 [2024-12-10 21:55:19.976515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.976530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:12.457 [2024-12-10 21:55:19.976542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:12.457 [2024-12-10 21:55:19.976556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.457 [2024-12-10 21:55:19.976585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.976597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:12.457 [2024-12-10 21:55:19.976608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:12.457 [2024-12-10 21:55:19.976618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.457 [2024-12-10 21:55:19.976641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:12.457 [2024-12-10 21:55:19.981401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.457 [2024-12-10 21:55:19.981436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:12.458 [2024-12-10 21:55:19.981449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.773 ms 00:24:12.458 [2024-12-10 21:55:19.981476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.458 [2024-12-10 21:55:19.981531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.458 [2024-12-10 21:55:19.981544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:12.458 [2024-12-10 21:55:19.981556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:12.458 [2024-12-10 21:55:19.981569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.458 [2024-12-10 21:55:19.981590] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:12.458 [2024-12-10 21:55:19.981615] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:12.458 [2024-12-10 21:55:19.981651] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:12.458 [2024-12-10 21:55:19.981668] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:12.458 [2024-12-10 21:55:19.981756] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:12.458 [2024-12-10 21:55:19.981770] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:12.458 [2024-12-10 21:55:19.981787] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:12.458 [2024-12-10 21:55:19.981800] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:12.458 [2024-12-10 21:55:19.981812] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:12.458 [2024-12-10 21:55:19.981823] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:12.458 [2024-12-10 21:55:19.981833] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:12.458 [2024-12-10 21:55:19.981843] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:12.458 [2024-12-10 21:55:19.981852] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:12.458 [2024-12-10 21:55:19.981863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.458 [2024-12-10 21:55:19.981873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:12.458 [2024-12-10 21:55:19.981883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:24:12.458 [2024-12-10 21:55:19.981894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.458 [2024-12-10 21:55:19.981971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.458 [2024-12-10 21:55:19.981983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:12.458 [2024-12-10 21:55:19.981993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:12.458 [2024-12-10 21:55:19.982003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.458 [2024-12-10 21:55:19.982110] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:12.458 [2024-12-10 21:55:19.982124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:12.458 [2024-12-10 21:55:19.982136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:12.458 [2024-12-10 21:55:19.982168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:12.458 [2024-12-10 21:55:19.982199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:12.458 [2024-12-10 21:55:19.982218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:12.458 [2024-12-10 21:55:19.982241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:12.458 [2024-12-10 21:55:19.982251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:12.458 [2024-12-10 21:55:19.982260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:12.458 [2024-12-10 21:55:19.982270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:12.458 [2024-12-10 21:55:19.982280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982289] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:12.458 [2024-12-10 21:55:19.982298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:12.458 [2024-12-10 21:55:19.982326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:12.458 [2024-12-10 21:55:19.982353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:12.458 [2024-12-10 21:55:19.982390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:12.458 [2024-12-10 21:55:19.982417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:12.458 [2024-12-10 21:55:19.982445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:12.458 [2024-12-10 21:55:19.982464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:12.458 [2024-12-10 21:55:19.982474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:12.458 [2024-12-10 21:55:19.982483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:12.458 [2024-12-10 21:55:19.982492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:12.458 [2024-12-10 21:55:19.982502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:12.458 [2024-12-10 21:55:19.982511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:12.458 [2024-12-10 21:55:19.982529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:12.458 [2024-12-10 21:55:19.982539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982550] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:12.458 [2024-12-10 21:55:19.982564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:12.458 [2024-12-10 21:55:19.982574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.458 [2024-12-10 21:55:19.982595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:12.458 
[2024-12-10 21:55:19.982605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:12.458 [2024-12-10 21:55:19.982614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:12.458 [2024-12-10 21:55:19.982623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:12.458 [2024-12-10 21:55:19.982632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:12.458 [2024-12-10 21:55:19.982642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:12.458 [2024-12-10 21:55:19.982652] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:12.458 [2024-12-10 21:55:19.982664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:12.458 [2024-12-10 21:55:19.982676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:12.458 [2024-12-10 21:55:19.982686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:12.458 [2024-12-10 21:55:19.982696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:12.458 [2024-12-10 21:55:19.982707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:12.458 [2024-12-10 21:55:19.982717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:12.458 [2024-12-10 21:55:19.982728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:12.458 [2024-12-10 21:55:19.982738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:12.458 [2024-12-10 21:55:19.982747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:12.458 [2024-12-10 21:55:19.982758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:12.458 [2024-12-10 21:55:19.982768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:12.458 [2024-12-10 21:55:19.982778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:12.458 [2024-12-10 21:55:19.982788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:12.459 [2024-12-10 21:55:19.982798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:12.459 [2024-12-10 21:55:19.982809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:12.459 [2024-12-10 21:55:19.982819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:12.459 [2024-12-10 21:55:19.982829] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:12.459 [2024-12-10 21:55:19.982846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:12.459 [2024-12-10 21:55:19.982856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:12.459 [2024-12-10 21:55:19.982866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:12.459 [2024-12-10 21:55:19.982876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:12.459 [2024-12-10 21:55:19.982887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:19.982898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:12.459 [2024-12-10 21:55:19.982908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:24:12.459 [2024-12-10 21:55:19.982922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.026082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.026128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:12.459 [2024-12-10 21:55:20.026142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.174 ms 00:24:12.459 [2024-12-10 21:55:20.026156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.026275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.026288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:12.459 [2024-12-10 21:55:20.026299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:12.459 [2024-12-10 21:55:20.026309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.087213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.087257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.459 [2024-12-10 21:55:20.087271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.979 ms 00:24:12.459 [2024-12-10 21:55:20.087298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.087398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.087413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.459 [2024-12-10 21:55:20.087424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:12.459 [2024-12-10 21:55:20.087435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.087885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.087898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.459 [2024-12-10 21:55:20.087914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:24:12.459 [2024-12-10 21:55:20.087924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 
21:55:20.088040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.088054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.459 [2024-12-10 21:55:20.088064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:24:12.459 [2024-12-10 21:55:20.088251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.109977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.110139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.459 [2024-12-10 21:55:20.110226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.688 ms 00:24:12.459 [2024-12-10 21:55:20.110263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.130627] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:12.459 [2024-12-10 21:55:20.130812] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:12.459 [2024-12-10 21:55:20.130913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.130947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:12.459 [2024-12-10 21:55:20.130980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.526 ms 00:24:12.459 [2024-12-10 21:55:20.131011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.159693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.159848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:12.459 [2024-12-10 21:55:20.160006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.615 ms 00:24:12.459 [2024-12-10 21:55:20.160023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.459 [2024-12-10 21:55:20.177817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.459 [2024-12-10 21:55:20.177942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:12.459 [2024-12-10 21:55:20.177962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.715 ms 00:24:12.459 [2024-12-10 21:55:20.177989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.718 [2024-12-10 21:55:20.195354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.718 [2024-12-10 21:55:20.195392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:12.718 [2024-12-10 21:55:20.195405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.299 ms 00:24:12.718 [2024-12-10 21:55:20.195414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.718 [2024-12-10 21:55:20.196176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.718 [2024-12-10 21:55:20.196217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:12.718 [2024-12-10 21:55:20.196230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:24:12.718 [2024-12-10 21:55:20.196240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.718 [2024-12-10 21:55:20.281008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:12.718 [2024-12-10 21:55:20.281103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:12.718 [2024-12-10 21:55:20.281122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.875 ms 00:24:12.718 [2024-12-10 21:55:20.281134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.718 [2024-12-10 21:55:20.291396] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:12.718 [2024-12-10 21:55:20.311118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.718 [2024-12-10 21:55:20.311166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:12.718 [2024-12-10 21:55:20.311189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.902 ms 00:24:12.718 [2024-12-10 21:55:20.311200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.718 [2024-12-10 21:55:20.311332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.718 [2024-12-10 21:55:20.311346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:12.718 [2024-12-10 21:55:20.311358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:12.718 [2024-12-10 21:55:20.311368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.718 [2024-12-10 21:55:20.311424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.719 [2024-12-10 21:55:20.311435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:12.719 [2024-12-10 21:55:20.311451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:12.719 [2024-12-10 21:55:20.311463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.719 [2024-12-10 21:55:20.311494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.719 [2024-12-10 21:55:20.311507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:12.719 [2024-12-10 21:55:20.311518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:12.719 [2024-12-10 21:55:20.311528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.719 [2024-12-10 21:55:20.311566] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:12.719 [2024-12-10 21:55:20.311579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.719 [2024-12-10 21:55:20.311589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:12.719 [2024-12-10 21:55:20.311599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:12.719 [2024-12-10 21:55:20.311608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.719 [2024-12-10 21:55:20.347488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.719 [2024-12-10 21:55:20.347531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:12.719 [2024-12-10 21:55:20.347546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.913 ms 00:24:12.719 [2024-12-10 21:55:20.347557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.719 [2024-12-10 21:55:20.347668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.719 [2024-12-10 21:55:20.347682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:24:12.719 [2024-12-10 21:55:20.347693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:12.719 [2024-12-10 21:55:20.347708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.719 [2024-12-10 21:55:20.348727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:12.719 [2024-12-10 21:55:20.352890] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.078 ms, result 0 00:24:12.719 [2024-12-10 21:55:20.353863] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:12.719 [2024-12-10 21:55:20.371493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:13.656  [2024-12-10T21:55:22.764Z] Copying: 26/256 [MB] (26 MBps) [2024-12-10T21:55:23.701Z] Copying: 49/256 [MB] (23 MBps) [2024-12-10T21:55:24.637Z] Copying: 73/256 [MB] (23 MBps) [2024-12-10T21:55:25.572Z] Copying: 96/256 [MB] (23 MBps) [2024-12-10T21:55:26.538Z] Copying: 119/256 [MB] (23 MBps) [2024-12-10T21:55:27.474Z] Copying: 142/256 [MB] (23 MBps) [2024-12-10T21:55:28.412Z] Copying: 166/256 [MB] (23 MBps) [2024-12-10T21:55:29.789Z] Copying: 188/256 [MB] (22 MBps) [2024-12-10T21:55:30.724Z] Copying: 211/256 [MB] (22 MBps) [2024-12-10T21:55:31.291Z] Copying: 235/256 [MB] (23 MBps) [2024-12-10T21:55:31.291Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-10 21:55:31.227391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:23.560 [2024-12-10 21:55:31.242150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.560 [2024-12-10 21:55:31.242191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:23.560 [2024-12-10 21:55:31.242232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:23.560 [2024-12-10 21:55:31.242243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.560 [2024-12-10 21:55:31.242267] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:23.560 [2024-12-10 21:55:31.246484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.560 [2024-12-10 21:55:31.246514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:23.560 [2024-12-10 21:55:31.246526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.207 ms 00:24:23.560 [2024-12-10 21:55:31.246551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.560 [2024-12-10 21:55:31.246774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.560 [2024-12-10 21:55:31.246788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:23.560 [2024-12-10 21:55:31.246799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:24:23.560 [2024-12-10 21:55:31.246809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.560 [2024-12-10 21:55:31.249656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.560 [2024-12-10 21:55:31.249800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:23.560 [2024-12-10 21:55:31.249838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:24:23.560 [2024-12-10 21:55:31.249850] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.560 [2024-12-10 21:55:31.255241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.560 [2024-12-10 21:55:31.255271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:23.560 [2024-12-10 21:55:31.255282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.371 ms 00:24:23.560 [2024-12-10 21:55:31.255292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.820 [2024-12-10 21:55:31.289695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.820 [2024-12-10 21:55:31.289734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:23.820 [2024-12-10 21:55:31.289747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.397 ms 00:24:23.820 [2024-12-10 21:55:31.289756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.820 [2024-12-10 21:55:31.310373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.820 [2024-12-10 21:55:31.310433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:23.820 [2024-12-10 21:55:31.310447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.594 ms 00:24:23.820 [2024-12-10 21:55:31.310473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.820 [2024-12-10 21:55:31.310610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.820 [2024-12-10 21:55:31.310624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:23.820 [2024-12-10 21:55:31.310650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:23.820 [2024-12-10 21:55:31.310660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.820 [2024-12-10 21:55:31.345453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.820 [2024-12-10 21:55:31.345615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:23.820 [2024-12-10 21:55:31.345652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.832 ms 00:24:23.820 [2024-12-10 21:55:31.345662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.820 [2024-12-10 21:55:31.380384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.820 [2024-12-10 21:55:31.380418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:23.820 [2024-12-10 21:55:31.380430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.649 ms 00:24:23.820 [2024-12-10 21:55:31.380440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.820 [2024-12-10 21:55:31.415848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.820 [2024-12-10 21:55:31.415903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:23.820 [2024-12-10 21:55:31.415917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.410 ms 00:24:23.820 [2024-12-10 21:55:31.415927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.820 [2024-12-10 21:55:31.451685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.820 [2024-12-10 21:55:31.451724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:23.820 [2024-12-10 21:55:31.451737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 35.732 ms 00:24:23.820 [2024-12-10 21:55:31.451747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.820 [2024-12-10 21:55:31.451804] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:23.820 [2024-12-10 21:55:31.451821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:23.820 [2024-12-10 21:55:31.451834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:23.820 [2024-12-10 21:55:31.451845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:23.820 [2024-12-10 21:55:31.451856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:23.820 [2024-12-10 21:55:31.451867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:23.820 [2024-12-10 21:55:31.451877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:23.820 [2024-12-10 21:55:31.451888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.451994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 
[2024-12-10 21:55:31.452076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:24:23.821 [2024-12-10 21:55:31.452348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:23.821 [2024-12-10 21:55:31.452851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:23.822 [2024-12-10 21:55:31.452861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:23.822 [2024-12-10 21:55:31.452872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:23.822 [2024-12-10 21:55:31.452883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:23.822 [2024-12-10 21:55:31.452894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:23.822 [2024-12-10 21:55:31.452912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:23.822 [2024-12-10 21:55:31.452922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1c8a73d-fd96-4644-ab7c-747f40be9c54 00:24:23.822 [2024-12-10 21:55:31.452933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:23.822 [2024-12-10 21:55:31.452944] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:23.822 [2024-12-10 21:55:31.452954] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:23.822 [2024-12-10 21:55:31.452964] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:23.822 [2024-12-10 21:55:31.452973] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:23.822 [2024-12-10 21:55:31.452993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:23.822 [2024-12-10 21:55:31.453003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:23.822 [2024-12-10 21:55:31.453012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:23.822 [2024-12-10 21:55:31.453021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:23.822 [2024-12-10 21:55:31.453031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.822 [2024-12-10 21:55:31.453041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:23.822 [2024-12-10 21:55:31.453061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:24:23.822 [2024-12-10 21:55:31.453072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.822 [2024-12-10 21:55:31.474015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.822 [2024-12-10 21:55:31.474062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:23.822 [2024-12-10 21:55:31.474076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.957 ms 00:24:23.822 [2024-12-10 21:55:31.474092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.822 [2024-12-10 21:55:31.474737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.822 [2024-12-10 21:55:31.474755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:23.822 [2024-12-10 21:55:31.474766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:24:23.822 [2024-12-10 21:55:31.474777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.822 [2024-12-10 21:55:31.531070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.822 [2024-12-10 21:55:31.531111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:23.822 [2024-12-10 21:55:31.531134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.822 [2024-12-10 21:55:31.531145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.822 [2024-12-10 21:55:31.531246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.822 [2024-12-10 
21:55:31.531259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:23.822 [2024-12-10 21:55:31.531269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.822 [2024-12-10 21:55:31.531279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.822 [2024-12-10 21:55:31.531331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.822 [2024-12-10 21:55:31.531344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:23.822 [2024-12-10 21:55:31.531354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.822 [2024-12-10 21:55:31.531364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.822 [2024-12-10 21:55:31.531393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.822 [2024-12-10 21:55:31.531404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:23.822 [2024-12-10 21:55:31.531414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.822 [2024-12-10 21:55:31.531424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.081 [2024-12-10 21:55:31.656363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.081 [2024-12-10 21:55:31.656558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.081 [2024-12-10 21:55:31.656652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.081 [2024-12-10 21:55:31.656704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.081 [2024-12-10 21:55:31.752611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.081 [2024-12-10 21:55:31.752813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.081 [2024-12-10 21:55:31.752990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.081 [2024-12-10 21:55:31.753029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.081 [2024-12-10 21:55:31.753159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.081 [2024-12-10 21:55:31.753297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.081 [2024-12-10 21:55:31.753369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.081 [2024-12-10 21:55:31.753400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.081 [2024-12-10 21:55:31.753464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.081 [2024-12-10 21:55:31.753500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.081 [2024-12-10 21:55:31.753531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.081 [2024-12-10 21:55:31.753561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.081 [2024-12-10 21:55:31.753819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.081 [2024-12-10 21:55:31.753837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.081 [2024-12-10 21:55:31.753850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.081 [2024-12-10 21:55:31.753861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.081 [2024-12-10 21:55:31.753905] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.081 [2024-12-10 21:55:31.753929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:24.081 [2024-12-10 21:55:31.753940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.081 [2024-12-10 21:55:31.753950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.081 [2024-12-10 21:55:31.753995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.081 [2024-12-10 21:55:31.754007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.081 [2024-12-10 21:55:31.754018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.081 [2024-12-10 21:55:31.754028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.082 [2024-12-10 21:55:31.754096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.082 [2024-12-10 21:55:31.754110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.082 [2024-12-10 21:55:31.754121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.082 [2024-12-10 21:55:31.754131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.082 [2024-12-10 21:55:31.754294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 512.973 ms, result 0 00:24:25.460 00:24:25.460 00:24:25.460 21:55:32 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:24:25.460 21:55:32 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:25.719 21:55:33 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:25.719 [2024-12-10 21:55:33.332253] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:24:25.719 [2024-12-10 21:55:33.332376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80144 ] 00:24:25.977 [2024-12-10 21:55:33.513270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.977 [2024-12-10 21:55:33.621727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.546 [2024-12-10 21:55:33.996307] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.547 [2024-12-10 21:55:33.996379] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.547 [2024-12-10 21:55:34.158864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.159146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:26.547 [2024-12-10 21:55:34.159175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:26.547 [2024-12-10 21:55:34.159187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.162572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.162731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:26.547 [2024-12-10 21:55:34.162754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.357 ms 00:24:26.547 [2024-12-10 21:55:34.162782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.162940] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:26.547 [2024-12-10 21:55:34.163994] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:26.547 [2024-12-10 21:55:34.164029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.164042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:26.547 [2024-12-10 21:55:34.164071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:24:26.547 [2024-12-10 21:55:34.164082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.166194] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:26.547 [2024-12-10 21:55:34.185324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.185362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:26.547 [2024-12-10 21:55:34.185376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.162 ms 00:24:26.547 [2024-12-10 21:55:34.185386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.185486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.185507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:26.547 [2024-12-10 21:55:34.185518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:26.547 [2024-12-10 21:55:34.185528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.195785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:26.547 [2024-12-10 21:55:34.195949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:26.547 [2024-12-10 21:55:34.195989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.233 ms 00:24:26.547 [2024-12-10 21:55:34.196000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.196146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.196162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:26.547 [2024-12-10 21:55:34.196174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:26.547 [2024-12-10 21:55:34.196185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.196219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.196231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:26.547 [2024-12-10 21:55:34.196242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:26.547 [2024-12-10 21:55:34.196252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.196275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:26.547 [2024-12-10 21:55:34.201368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.201401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:26.547 [2024-12-10 21:55:34.201413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.106 ms 00:24:26.547 [2024-12-10 21:55:34.201440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.201493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.201505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:26.547 [2024-12-10 21:55:34.201516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:26.547 [2024-12-10 21:55:34.201526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.201549] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:26.547 [2024-12-10 21:55:34.201574] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:26.547 [2024-12-10 21:55:34.201610] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:26.547 [2024-12-10 21:55:34.201627] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:26.547 [2024-12-10 21:55:34.201715] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:26.547 [2024-12-10 21:55:34.201729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:26.547 [2024-12-10 21:55:34.201742] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:26.547 [2024-12-10 21:55:34.201758] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:26.547 [2024-12-10 21:55:34.201770] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:26.547 [2024-12-10 21:55:34.201782] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:26.547 [2024-12-10 21:55:34.201792] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:26.547 [2024-12-10 21:55:34.201802] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:26.547 [2024-12-10 21:55:34.201812] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:26.547 [2024-12-10 21:55:34.201823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.201834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:26.547 [2024-12-10 21:55:34.201844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:24:26.547 [2024-12-10 21:55:34.201853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.201927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.547 [2024-12-10 21:55:34.201942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:26.547 [2024-12-10 21:55:34.201952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:26.547 [2024-12-10 21:55:34.201962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.547 [2024-12-10 21:55:34.202047] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:26.547 [2024-12-10 21:55:34.202059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:26.547 [2024-12-10 21:55:34.202266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.547 [2024-12-10 21:55:34.202301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.547 [2024-12-10 21:55:34.202333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:26.547 [2024-12-10 21:55:34.202362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:26.547 [2024-12-10 21:55:34.202404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:26.547 [2024-12-10 21:55:34.202434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:26.547 [2024-12-10 21:55:34.202463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:26.547 [2024-12-10 21:55:34.202544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.547 [2024-12-10 21:55:34.202579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:26.547 [2024-12-10 21:55:34.202622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:26.547 [2024-12-10 21:55:34.202652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:26.547 [2024-12-10 21:55:34.202681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:26.547 [2024-12-10 21:55:34.202711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:26.547 [2024-12-10 21:55:34.202739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.547 [2024-12-10 21:55:34.202822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:26.547 [2024-12-10 21:55:34.202856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:26.547 [2024-12-10 21:55:34.202886] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.547 [2024-12-10 21:55:34.202915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:26.547 [2024-12-10 21:55:34.202946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:26.547 [2024-12-10 21:55:34.202975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.547 [2024-12-10 21:55:34.203121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:26.547 [2024-12-10 21:55:34.203134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:26.547 [2024-12-10 21:55:34.203144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.547 [2024-12-10 21:55:34.203153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:26.547 [2024-12-10 21:55:34.203163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:26.547 [2024-12-10 21:55:34.203172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.547 [2024-12-10 21:55:34.203183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:26.547 [2024-12-10 21:55:34.203192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:26.547 [2024-12-10 21:55:34.203202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:26.547 [2024-12-10 21:55:34.203211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:26.547 [2024-12-10 21:55:34.203221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:26.547 [2024-12-10 21:55:34.203230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.547 [2024-12-10 21:55:34.203241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:26.547 [2024-12-10 21:55:34.203250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:26.547 [2024-12-10 21:55:34.203259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:26.547 [2024-12-10 21:55:34.203268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:26.547 [2024-12-10 21:55:34.203278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:26.548 [2024-12-10 21:55:34.203287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.548 [2024-12-10 21:55:34.203296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:26.548 [2024-12-10 21:55:34.203305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:26.548 [2024-12-10 21:55:34.203314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.548 [2024-12-10 21:55:34.203323] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:26.548 [2024-12-10 21:55:34.203334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:26.548 [2024-12-10 21:55:34.203350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:26.548 [2024-12-10 21:55:34.203360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:26.548 [2024-12-10 21:55:34.203371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:26.548 [2024-12-10 21:55:34.203380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:26.548 [2024-12-10 21:55:34.203390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:26.548 
[2024-12-10 21:55:34.203400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:26.548 [2024-12-10 21:55:34.203409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:26.548 [2024-12-10 21:55:34.203421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:26.548 [2024-12-10 21:55:34.203433] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:26.548 [2024-12-10 21:55:34.203447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.548 [2024-12-10 21:55:34.203459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:26.548 [2024-12-10 21:55:34.203470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:26.548 [2024-12-10 21:55:34.203482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:26.548 [2024-12-10 21:55:34.203493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:26.548 [2024-12-10 21:55:34.203504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:26.548 [2024-12-10 21:55:34.203515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:26.548 [2024-12-10 21:55:34.203526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:26.548 [2024-12-10 21:55:34.203536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:26.548 [2024-12-10 21:55:34.203546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:26.548 [2024-12-10 21:55:34.203557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:26.548 [2024-12-10 21:55:34.203568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:26.548 [2024-12-10 21:55:34.203579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:26.548 [2024-12-10 21:55:34.203589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:26.548 [2024-12-10 21:55:34.203600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:26.548 [2024-12-10 21:55:34.203611] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:26.548 [2024-12-10 21:55:34.203623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:26.548 [2024-12-10 21:55:34.203634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:26.548 [2024-12-10 21:55:34.203645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:26.548 [2024-12-10 21:55:34.203656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:26.548 [2024-12-10 21:55:34.203667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:26.548 [2024-12-10 21:55:34.203679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.548 [2024-12-10 21:55:34.203695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:26.548 [2024-12-10 21:55:34.203707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:24:26.548 [2024-12-10 21:55:34.203717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.548 [2024-12-10 21:55:34.245023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.548 [2024-12-10 21:55:34.245069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:26.548 [2024-12-10 21:55:34.245083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.305 ms 00:24:26.548 [2024-12-10 21:55:34.245094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.548 [2024-12-10 21:55:34.245239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.548 [2024-12-10 21:55:34.245252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:26.548 [2024-12-10 21:55:34.245264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:26.548 [2024-12-10 21:55:34.245275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.807 [2024-12-10 21:55:34.310276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.807 [2024-12-10 21:55:34.310315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.807 [2024-12-10 21:55:34.310331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.083 ms 00:24:26.807 [2024-12-10 21:55:34.310341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.807 [2024-12-10 21:55:34.310457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.807 [2024-12-10 21:55:34.310471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.807 [2024-12-10 21:55:34.310482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:26.807 [2024-12-10 21:55:34.310492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.807 [2024-12-10 21:55:34.310931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.807 [2024-12-10 21:55:34.310945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.807 [2024-12-10 21:55:34.310956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:24:26.807 [2024-12-10 21:55:34.310970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.807 [2024-12-10 21:55:34.311117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.807 [2024-12-10 21:55:34.311132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.807 [2024-12-10 21:55:34.311143] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:24:26.807 [2024-12-10 21:55:34.311153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.807 [2024-12-10 21:55:34.331274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.807 [2024-12-10 21:55:34.331307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.807 [2024-12-10 21:55:34.331319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.130 ms 00:24:26.807 [2024-12-10 21:55:34.331346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.807 [2024-12-10 21:55:34.350208] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:26.807 [2024-12-10 21:55:34.350245] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:26.807 [2024-12-10 21:55:34.350260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.807 [2024-12-10 21:55:34.350270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:26.808 [2024-12-10 21:55:34.350281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.837 ms 00:24:26.808 [2024-12-10 21:55:34.350291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.378376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.378414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:26.808 [2024-12-10 21:55:34.378428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.037 ms 00:24:26.808 [2024-12-10 21:55:34.378439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.396156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.396194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:26.808 [2024-12-10 21:55:34.396206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.665 ms 00:24:26.808 [2024-12-10 21:55:34.396215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.413556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.413592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:26.808 [2024-12-10 21:55:34.413605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.294 ms 00:24:26.808 [2024-12-10 21:55:34.413614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.414439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.414465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.808 [2024-12-10 21:55:34.414478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:24:26.808 [2024-12-10 21:55:34.414488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.500627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.500696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.808 [2024-12-10 21:55:34.500713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.246 ms 00:24:26.808 [2024-12-10 21:55:34.500724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.510947] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:26.808 [2024-12-10 21:55:34.529329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.529552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.808 [2024-12-10 21:55:34.529597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.556 ms 00:24:26.808 [2024-12-10 21:55:34.529616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.529743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.529758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:26.808 [2024-12-10 21:55:34.529770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:26.808 [2024-12-10 21:55:34.529781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.529846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.529858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.808 [2024-12-10 21:55:34.529869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:26.808 [2024-12-10 21:55:34.529885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.529925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.529938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.808 [2024-12-10 21:55:34.529949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:26.808 [2024-12-10 21:55:34.529959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.808 [2024-12-10 21:55:34.529998] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:26.808 [2024-12-10 21:55:34.530011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.808 [2024-12-10 21:55:34.530021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:26.808 [2024-12-10 21:55:34.530032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:26.808 [2024-12-10 21:55:34.530042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.067 [2024-12-10 21:55:34.567310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.067 [2024-12-10 21:55:34.567460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:27.067 [2024-12-10 21:55:34.567483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.280 ms 00:24:27.067 [2024-12-10 21:55:34.567495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.067 [2024-12-10 21:55:34.567607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.067 [2024-12-10 21:55:34.567621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:27.067 [2024-12-10 21:55:34.567632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:27.067 [2024-12-10 21:55:34.567643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:27.067 [2024-12-10 21:55:34.568574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.067 [2024-12-10 21:55:34.572747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.073 ms, result 0 00:24:27.067 [2024-12-10 21:55:34.573650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:27.067 [2024-12-10 21:55:34.592373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.067  [2024-12-10T21:55:34.798Z] Copying: 4096/4096 [kB] (average 21 MBps)[2024-12-10 21:55:34.780198] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:27.067 [2024-12-10 21:55:34.793950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.067 [2024-12-10 21:55:34.793989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:27.067 [2024-12-10 21:55:34.794007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:27.067 [2024-12-10 21:55:34.794017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.067 [2024-12-10 21:55:34.794038] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:27.328 [2024-12-10 21:55:34.797862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.798007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:27.328 [2024-12-10 21:55:34.798028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.801 ms 00:24:27.328 [2024-12-10 21:55:34.798057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.799825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.799863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:27.328 [2024-12-10 21:55:34.799877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:24:27.328 [2024-12-10 21:55:34.799887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.803132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.803163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:27.328 [2024-12-10 21:55:34.803174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:24:27.328 [2024-12-10 21:55:34.803185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.808607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.808732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:27.328 [2024-12-10 21:55:34.808751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.399 ms 00:24:27.328 [2024-12-10 21:55:34.808777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.843399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.843447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:27.328 [2024-12-10 21:55:34.843459] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 34.617 ms 00:24:27.328 [2024-12-10 21:55:34.843469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.864801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.864845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:27.328 [2024-12-10 21:55:34.864858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.311 ms 00:24:27.328 [2024-12-10 21:55:34.864868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.864997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.865009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:27.328 [2024-12-10 21:55:34.865030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:27.328 [2024-12-10 21:55:34.865039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.900670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.900705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:27.328 [2024-12-10 21:55:34.900717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.655 ms 00:24:27.328 [2024-12-10 21:55:34.900742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.935714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.935854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:27.328 [2024-12-10 21:55:34.935875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.974 ms 00:24:27.328 [2024-12-10 21:55:34.935901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:34.971552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:34.971593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:27.328 [2024-12-10 21:55:34.971605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.630 ms 00:24:27.328 [2024-12-10 21:55:34.971616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:35.007007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.328 [2024-12-10 21:55:35.007101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:27.328 [2024-12-10 21:55:35.007118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.370 ms 00:24:27.328 [2024-12-10 21:55:35.007128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.328 [2024-12-10 21:55:35.007185] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:27.328 [2024-12-10 21:55:35.007204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:27.328 [2024-12-10 21:55:35.007251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:27.328 [2024-12-10 21:55:35.007408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.007994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008064] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:27.329 [2024-12-10 21:55:35.008316] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:27.329 [2024-12-10 21:55:35.008326] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1c8a73d-fd96-4644-ab7c-747f40be9c54 00:24:27.329 [2024-12-10 21:55:35.008337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:27.329 [2024-12-10 21:55:35.008348] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:24:27.329 [2024-12-10 21:55:35.008358] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:27.329 [2024-12-10 21:55:35.008368] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:27.329 [2024-12-10 21:55:35.008378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:27.329 [2024-12-10 21:55:35.008388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:27.329 [2024-12-10 21:55:35.008402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:27.329 [2024-12-10 21:55:35.008411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:27.329 [2024-12-10 21:55:35.008420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:27.329 [2024-12-10 21:55:35.008430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.329 [2024-12-10 21:55:35.008440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:27.329 [2024-12-10 21:55:35.008450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:24:27.330 [2024-12-10 21:55:35.008460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.330 [2024-12-10 21:55:35.028714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.330 [2024-12-10 21:55:35.028747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:27.330 [2024-12-10 21:55:35.028759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.266 ms 00:24:27.330 [2024-12-10 21:55:35.028785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.330 [2024-12-10 21:55:35.029413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.330 [2024-12-10 21:55:35.029431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:27.330 [2024-12-10 21:55:35.029442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:24:27.330 [2024-12-10 21:55:35.029452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.588 [2024-12-10 21:55:35.086247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.588 [2024-12-10 21:55:35.086464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:27.588 [2024-12-10 21:55:35.086490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.588 [2024-12-10 21:55:35.086508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.588 [2024-12-10 21:55:35.086632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.588 [2024-12-10 21:55:35.086645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:27.588 [2024-12-10 21:55:35.086656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.588 [2024-12-10 21:55:35.086666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.588 [2024-12-10 21:55:35.086724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.588 [2024-12-10 21:55:35.086738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:27.589 [2024-12-10 21:55:35.086748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.589 [2024-12-10 21:55:35.086759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.589 [2024-12-10 21:55:35.086784] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.589 [2024-12-10 21:55:35.086796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:27.589 [2024-12-10 21:55:35.086806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.589 [2024-12-10 21:55:35.086816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.589 [2024-12-10 21:55:35.221168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.589 [2024-12-10 21:55:35.221238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.589 [2024-12-10 21:55:35.221255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.589 [2024-12-10 21:55:35.221267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.847 [2024-12-10 21:55:35.321150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.847 [2024-12-10 21:55:35.321413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:27.847 [2024-12-10 21:55:35.321454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.847 [2024-12-10 21:55:35.321466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.847 [2024-12-10 21:55:35.321578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.847 [2024-12-10 21:55:35.321593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.847 [2024-12-10 21:55:35.321604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.847 [2024-12-10 21:55:35.321615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.847 [2024-12-10 21:55:35.321646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.847 [2024-12-10 21:55:35.321665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.847 [2024-12-10 21:55:35.321675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.847 [2024-12-10 21:55:35.321686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.847 [2024-12-10 21:55:35.321820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.847 [2024-12-10 21:55:35.321834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.847 [2024-12-10 21:55:35.321845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.847 [2024-12-10 21:55:35.321855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.847 [2024-12-10 21:55:35.321895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.848 [2024-12-10 21:55:35.321909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:27.848 [2024-12-10 21:55:35.321925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.848 [2024-12-10 21:55:35.321936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.848 [2024-12-10 21:55:35.321981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.848 [2024-12-10 21:55:35.321993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.848 [2024-12-10 21:55:35.322004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.848 [2024-12-10 21:55:35.322015] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:27.848 [2024-12-10 21:55:35.322062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.848 [2024-12-10 21:55:35.322102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.848 [2024-12-10 21:55:35.322114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.848 [2024-12-10 21:55:35.322124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.848 [2024-12-10 21:55:35.322281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.176 ms, result 0 00:24:28.785 00:24:28.785 00:24:28.785 21:55:36 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80182 00:24:28.785 21:55:36 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:28.785 21:55:36 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80182 00:24:28.785 21:55:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 80182 ']' 00:24:28.785 21:55:36 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.785 21:55:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.785 21:55:36 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.785 21:55:36 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.785 21:55:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:28.785 [2024-12-10 21:55:36.492707] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:24:28.785 [2024-12-10 21:55:36.492852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80182 ] 00:24:29.044 [2024-12-10 21:55:36.675156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.303 [2024-12-10 21:55:36.779108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.240 21:55:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.240 21:55:37 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:30.240 21:55:37 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:30.240 [2024-12-10 21:55:37.857148] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.240 [2024-12-10 21:55:37.857407] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:30.501 [2024-12-10 21:55:38.047265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.047520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:30.501 [2024-12-10 21:55:38.047682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:30.501 [2024-12-10 21:55:38.047703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.051772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.051814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:30.501 [2024-12-10 21:55:38.051829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:24:30.501 [2024-12-10 21:55:38.051839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.051966] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:30.501 [2024-12-10 21:55:38.052981] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:30.501 [2024-12-10 21:55:38.053018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.053030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:30.501 [2024-12-10 21:55:38.053044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:24:30.501 [2024-12-10 21:55:38.053063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.054578] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:30.501 [2024-12-10 21:55:38.073546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.073605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:30.501 [2024-12-10 21:55:38.073621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.002 ms 00:24:30.501 [2024-12-10 21:55:38.073636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.073740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.073758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:30.501 [2024-12-10 21:55:38.073769] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:30.501 [2024-12-10 21:55:38.073783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.080649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.080856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:30.501 [2024-12-10 21:55:38.080877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.822 ms 00:24:30.501 [2024-12-10 21:55:38.080894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.081038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.081078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:30.501 [2024-12-10 21:55:38.081091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:30.501 [2024-12-10 21:55:38.081114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.081142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.081159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:30.501 [2024-12-10 21:55:38.081170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:30.501 [2024-12-10 21:55:38.081186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.081211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:30.501 [2024-12-10 21:55:38.086065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.086096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:30.501 [2024-12-10 21:55:38.086114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.861 ms 00:24:30.501 [2024-12-10 21:55:38.086124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.086205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.086217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:30.501 [2024-12-10 21:55:38.086233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:30.501 [2024-12-10 21:55:38.086249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.086275] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:30.501 [2024-12-10 21:55:38.086303] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:30.501 [2024-12-10 21:55:38.086356] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:30.501 [2024-12-10 21:55:38.086386] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:30.501 [2024-12-10 21:55:38.086475] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:30.501 [2024-12-10 21:55:38.086488] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:30.501 [2024-12-10 21:55:38.086511] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:30.501 [2024-12-10 21:55:38.086525] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:30.501 [2024-12-10 21:55:38.086541] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:30.501 [2024-12-10 21:55:38.086553] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:30.501 [2024-12-10 21:55:38.086567] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:30.501 [2024-12-10 21:55:38.086577] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:30.501 [2024-12-10 21:55:38.086597] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:30.501 [2024-12-10 21:55:38.086607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.086623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:30.501 [2024-12-10 21:55:38.086633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:24:30.501 [2024-12-10 21:55:38.086648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.086724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.501 [2024-12-10 21:55:38.086740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:30.501 [2024-12-10 21:55:38.086751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:30.501 [2024-12-10 21:55:38.086765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.501 [2024-12-10 21:55:38.086848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:30.501 [2024-12-10 21:55:38.086865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:30.501 [2024-12-10 21:55:38.086875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.501 [2024-12-10 21:55:38.086891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.501 [2024-12-10 21:55:38.086902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:30.501 [2024-12-10 21:55:38.086918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:30.501 [2024-12-10 21:55:38.086927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:30.501 [2024-12-10 21:55:38.086948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:30.501 [2024-12-10 21:55:38.086958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:30.501 [2024-12-10 21:55:38.086972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.501 [2024-12-10 21:55:38.086981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:30.501 [2024-12-10 21:55:38.086995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:30.501 [2024-12-10 21:55:38.087004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:30.501 [2024-12-10 21:55:38.087018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:30.501 [2024-12-10 21:55:38.087028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:30.501 [2024-12-10 21:55:38.087042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.501 
[2024-12-10 21:55:38.087069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:30.501 [2024-12-10 21:55:38.087084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:30.501 [2024-12-10 21:55:38.087127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:30.502 [2024-12-10 21:55:38.087152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.502 [2024-12-10 21:55:38.087192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:30.502 [2024-12-10 21:55:38.087211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.502 [2024-12-10 21:55:38.087235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:30.502 [2024-12-10 21:55:38.087245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.502 [2024-12-10 21:55:38.087269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:30.502 [2024-12-10 21:55:38.087285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:30.502 [2024-12-10 21:55:38.087309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:30.502 [2024-12-10 21:55:38.087319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.502 [2024-12-10 21:55:38.087352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:30.502 [2024-12-10 21:55:38.087367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:30.502 [2024-12-10 21:55:38.087377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:30.502 [2024-12-10 21:55:38.087391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:30.502 [2024-12-10 21:55:38.087401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:30.502 [2024-12-10 21:55:38.087420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:30.502 [2024-12-10 21:55:38.087444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:30.502 [2024-12-10 21:55:38.087454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087468] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:30.502 [2024-12-10 21:55:38.087483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:30.502 [2024-12-10 21:55:38.087502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:30.502 [2024-12-10 21:55:38.087512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:30.502 [2024-12-10 21:55:38.087528] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:30.502 [2024-12-10 21:55:38.087538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:30.502 [2024-12-10 21:55:38.087552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:30.502 [2024-12-10 21:55:38.087561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:30.502 [2024-12-10 21:55:38.087576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:30.502 [2024-12-10 21:55:38.087585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:30.502 [2024-12-10 21:55:38.087602] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:30.502 [2024-12-10 21:55:38.087615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.502 [2024-12-10 21:55:38.087638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:30.502 [2024-12-10 21:55:38.087648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:30.502 [2024-12-10 21:55:38.087665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:30.502 [2024-12-10 21:55:38.087675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:30.502 [2024-12-10 21:55:38.087691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:30.502 [2024-12-10 21:55:38.087702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:30.502 [2024-12-10 21:55:38.087719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:30.502 [2024-12-10 21:55:38.087730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:30.502 [2024-12-10 21:55:38.087746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:30.502 [2024-12-10 21:55:38.087757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:30.502 [2024-12-10 21:55:38.087772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:30.502 [2024-12-10 21:55:38.087783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:30.502 [2024-12-10 21:55:38.087798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:30.502 [2024-12-10 21:55:38.087809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:30.502 [2024-12-10 21:55:38.087826] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:30.502 [2024-12-10 
21:55:38.087837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:30.502 [2024-12-10 21:55:38.087858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:30.502 [2024-12-10 21:55:38.087869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:30.502 [2024-12-10 21:55:38.087885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:30.502 [2024-12-10 21:55:38.087896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:30.502 [2024-12-10 21:55:38.087912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.502 [2024-12-10 21:55:38.087924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:30.502 [2024-12-10 21:55:38.087940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:24:30.502 [2024-12-10 21:55:38.087955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.502 [2024-12-10 21:55:38.128636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.502 [2024-12-10 21:55:38.128672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:30.502 [2024-12-10 21:55:38.128688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.676 ms 00:24:30.502 [2024-12-10 21:55:38.128701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.502 [2024-12-10 21:55:38.128812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.502 [2024-12-10 21:55:38.128824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:30.502 [2024-12-10 21:55:38.128839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:30.502 [2024-12-10 21:55:38.128849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.502 [2024-12-10 21:55:38.179006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.502 [2024-12-10 21:55:38.179265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:30.502 [2024-12-10 21:55:38.179299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.208 ms 00:24:30.502 [2024-12-10 21:55:38.179311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.502 [2024-12-10 21:55:38.179411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.502 [2024-12-10 21:55:38.179426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:30.502 [2024-12-10 21:55:38.179443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:30.502 [2024-12-10 21:55:38.179454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.502 [2024-12-10 21:55:38.179911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.502 [2024-12-10 21:55:38.179925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:30.502 [2024-12-10 21:55:38.179947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:24:30.502 [2024-12-10 21:55:38.179958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:30.502 [2024-12-10 21:55:38.180094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.502 [2024-12-10 21:55:38.180108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:30.502 [2024-12-10 21:55:38.180124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:30.502 [2024-12-10 21:55:38.180135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.502 [2024-12-10 21:55:38.202093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.502 [2024-12-10 21:55:38.202267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:30.502 [2024-12-10 21:55:38.202299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.952 ms 00:24:30.502 [2024-12-10 21:55:38.202311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 21:55:38.248300] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:30.762 [2024-12-10 21:55:38.248350] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:30.762 [2024-12-10 21:55:38.248377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.248393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:30.762 [2024-12-10 21:55:38.248412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.011 ms 00:24:30.762 [2024-12-10 21:55:38.248439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 21:55:38.276688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.276725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:30.762 [2024-12-10 21:55:38.276742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.185 ms 00:24:30.762 [2024-12-10 21:55:38.276752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 21:55:38.294053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.294104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:30.762 [2024-12-10 21:55:38.294125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.241 ms 00:24:30.762 [2024-12-10 21:55:38.294135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 21:55:38.311619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.311762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:30.762 [2024-12-10 21:55:38.311806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.425 ms 00:24:30.762 [2024-12-10 21:55:38.311817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 21:55:38.312608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.312634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:30.762 [2024-12-10 21:55:38.312652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:24:30.762 [2024-12-10 21:55:38.312663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 
21:55:38.394774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.394837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:30.762 [2024-12-10 21:55:38.394860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.205 ms 00:24:30.762 [2024-12-10 21:55:38.394871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 21:55:38.405127] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:30.762 [2024-12-10 21:55:38.420700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.420762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:30.762 [2024-12-10 21:55:38.420785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.744 ms 00:24:30.762 [2024-12-10 21:55:38.420801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 21:55:38.420898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.420916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:30.762 [2024-12-10 21:55:38.420928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:30.762 [2024-12-10 21:55:38.420943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.762 [2024-12-10 21:55:38.420997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.762 [2024-12-10 21:55:38.421013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:30.762 [2024-12-10 21:55:38.421024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:30.762 [2024-12-10 21:55:38.421043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.763 [2024-12-10 21:55:38.421108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.763 [2024-12-10 21:55:38.421126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:30.763 [2024-12-10 21:55:38.421138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:30.763 [2024-12-10 21:55:38.421152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.763 [2024-12-10 21:55:38.421197] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:30.763 [2024-12-10 21:55:38.421220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.763 [2024-12-10 21:55:38.421237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:30.763 [2024-12-10 21:55:38.421253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:30.763 [2024-12-10 21:55:38.421263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.763 [2024-12-10 21:55:38.456966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.763 [2024-12-10 21:55:38.457006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:30.763 [2024-12-10 21:55:38.457043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.721 ms 00:24:30.763 [2024-12-10 21:55:38.457054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.763 [2024-12-10 21:55:38.457185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.763 [2024-12-10 21:55:38.457199] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:30.763 [2024-12-10 21:55:38.457216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:30.763 [2024-12-10 21:55:38.457231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.763 [2024-12-10 21:55:38.458348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:30.763 [2024-12-10 21:55:38.462809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 411.353 ms, result 0 00:24:30.763 [2024-12-10 21:55:38.464214] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:31.021 Some configs were skipped because the RPC state that can call them passed over. 00:24:31.021 21:55:38 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:31.021 [2024-12-10 21:55:38.707157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.021 [2024-12-10 21:55:38.707355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:31.021 [2024-12-10 21:55:38.707474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:24:31.021 [2024-12-10 21:55:38.707526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.021 [2024-12-10 21:55:38.707629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.095 ms, result 0 00:24:31.021 true 00:24:31.021 21:55:38 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:31.281 [2024-12-10 21:55:38.906881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.281 [2024-12-10 21:55:38.907088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:31.281 [2024-12-10 21:55:38.907184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:24:31.281 [2024-12-10 21:55:38.907226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.281 [2024-12-10 21:55:38.907315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.847 ms, result 0 00:24:31.281 true 00:24:31.281 21:55:38 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80182 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 80182 ']' 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 80182 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80182 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.281 killing process with pid 80182 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80182' 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 80182 00:24:31.281 21:55:38 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 80182 00:24:32.660 [2024-12-10 21:55:40.074538] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.074606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:32.660 [2024-12-10 21:55:40.074623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:32.660 [2024-12-10 21:55:40.074635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.660 [2024-12-10 21:55:40.074664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:32.660 [2024-12-10 21:55:40.078656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.078694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:32.660 [2024-12-10 21:55:40.078713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.974 ms 00:24:32.660 [2024-12-10 21:55:40.078723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.660 [2024-12-10 21:55:40.078987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.079002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:32.660 [2024-12-10 21:55:40.079015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:24:32.660 [2024-12-10 21:55:40.079025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.660 [2024-12-10 21:55:40.082409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.082452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:32.660 [2024-12-10 21:55:40.082470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms 00:24:32.660 [2024-12-10 21:55:40.082480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.660 [2024-12-10 21:55:40.087881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.087919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:32.660 [2024-12-10 21:55:40.087933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.363 ms 00:24:32.660 [2024-12-10 21:55:40.087943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.660 [2024-12-10 21:55:40.102477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.102523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:32.660 [2024-12-10 21:55:40.102542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.484 ms 00:24:32.660 [2024-12-10 21:55:40.102551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.660 [2024-12-10 21:55:40.113183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.113417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:32.660 [2024-12-10 21:55:40.113445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.571 ms 00:24:32.660 [2024-12-10 21:55:40.113456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.660 [2024-12-10 21:55:40.113661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.113676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:32.660 [2024-12-10 21:55:40.113689] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:32.660 [2024-12-10 21:55:40.113699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:32.660 [2024-12-10 21:55:40.128781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.128948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:32.660 [2024-12-10 21:55:40.128974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.083 ms 00:24:32.660 [2024-12-10 21:55:40.128984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:32.660 [2024-12-10 21:55:40.143297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.143462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:32.660 [2024-12-10 21:55:40.143491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.207 ms 00:24:32.660 [2024-12-10 21:55:40.143501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:32.660 [2024-12-10 21:55:40.157372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.157532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:32.660 [2024-12-10 21:55:40.157558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.799 ms 00:24:32.660 [2024-12-10 21:55:40.157568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:32.660 [2024-12-10 21:55:40.172023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.660 [2024-12-10 21:55:40.172069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:32.660 [2024-12-10 21:55:40.172085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.354 ms 00:24:32.660 [2024-12-10 21:55:40.172095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:32.660 [2024-12-10 21:55:40.172167] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[Band 1 through Band 100 all report: 0 / 261120 wr_cnt: 0 state: free (100 identical ftl_dev_dump_bands entries collapsed)]
00:24:32.661 [2024-12-10 21:55:40.173523] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:32.661 [2024-12-10 21:55:40.173545] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1c8a73d-fd96-4644-ab7c-747f40be9c54 00:24:32.661 [2024-12-10 21:55:40.173560] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:32.661 [2024-12-10 21:55:40.173574] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:32.661 [2024-12-10 21:55:40.173584] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:32.661 [2024-12-10 21:55:40.173596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:32.661 [2024-12-10 21:55:40.173606] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:32.661 [2024-12-10 21:55:40.173620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:32.661 [2024-12-10 21:55:40.173631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:32.661 [2024-12-10 21:55:40.173642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:32.661 [2024-12-10 21:55:40.173651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:32.661 [2024-12-10 21:55:40.173664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
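[Note: the "WAF: inf" value in the statistics above is expected for this run. Write amplification factor is total media writes divided by user writes, and the device saw no user I/O before this shutdown (total writes: 960, user writes: 0, so the ratio diverges). A minimal C sketch of that arithmetic under IEEE 754 float semantics, for illustration only and not SPDK's actual implementation:]

#include <stdio.h>

/* Write amplification factor: media writes per user write.
 * With zero user writes the division yields +inf, which printf
 * renders as "inf", matching the dump above. */
static double waf(double total_writes, double user_writes)
{
    return total_writes / user_writes;
}

int main(void)
{
    printf("WAF: %g\n", waf(960.0, 0.0)); /* prints: WAF: inf */
    return 0;
}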
00:24:32.661 [2024-12-10 21:55:40.173675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:32.661 [2024-12-10 21:55:40.173689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:24:32.661 [2024-12-10 21:55:40.173700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.661 [2024-12-10 21:55:40.192973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.661 [2024-12-10 21:55:40.193005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:32.661 [2024-12-10 21:55:40.193024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.273 ms 00:24:32.661 [2024-12-10 21:55:40.193034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.661 [2024-12-10 21:55:40.193670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.661 [2024-12-10 21:55:40.193689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:32.661 [2024-12-10 21:55:40.193706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:24:32.661 [2024-12-10 21:55:40.193717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.661 [2024-12-10 21:55:40.261243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.661 [2024-12-10 21:55:40.261297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:32.661 [2024-12-10 21:55:40.261315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.661 [2024-12-10 21:55:40.261326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.661 [2024-12-10 21:55:40.261425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.661 [2024-12-10 21:55:40.261437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:32.661 [2024-12-10 21:55:40.261455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.661 [2024-12-10 21:55:40.261466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.661 [2024-12-10 21:55:40.261521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.661 [2024-12-10 21:55:40.261534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:32.661 [2024-12-10 21:55:40.261550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.662 [2024-12-10 21:55:40.261560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.662 [2024-12-10 21:55:40.261584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.662 [2024-12-10 21:55:40.261594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:32.662 [2024-12-10 21:55:40.261608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.662 [2024-12-10 21:55:40.261621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.662 [2024-12-10 21:55:40.385850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.662 [2024-12-10 21:55:40.385919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:32.662 [2024-12-10 21:55:40.385940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.662 [2024-12-10 21:55:40.385951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.921 [2024-12-10 
21:55:40.483691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.921 [2024-12-10 21:55:40.483755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:32.921 [2024-12-10 21:55:40.483773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.921 [2024-12-10 21:55:40.483811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.921 [2024-12-10 21:55:40.483916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.921 [2024-12-10 21:55:40.483929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:32.921 [2024-12-10 21:55:40.483946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.921 [2024-12-10 21:55:40.483956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.921 [2024-12-10 21:55:40.483989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.921 [2024-12-10 21:55:40.483999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:32.921 [2024-12-10 21:55:40.484012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.921 [2024-12-10 21:55:40.484021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.921 [2024-12-10 21:55:40.484179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.921 [2024-12-10 21:55:40.484195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:32.921 [2024-12-10 21:55:40.484208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.921 [2024-12-10 21:55:40.484217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.921 [2024-12-10 21:55:40.484262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.921 [2024-12-10 21:55:40.484274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:32.921 [2024-12-10 21:55:40.484288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.921 [2024-12-10 21:55:40.484297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.921 [2024-12-10 21:55:40.484347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.921 [2024-12-10 21:55:40.484359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:32.921 [2024-12-10 21:55:40.484374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.921 [2024-12-10 21:55:40.484384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.921 [2024-12-10 21:55:40.484433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:32.921 [2024-12-10 21:55:40.484445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:32.921 [2024-12-10 21:55:40.484458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:32.921 [2024-12-10 21:55:40.484467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.921 [2024-12-10 21:55:40.484635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 410.710 ms, result 0 00:24:33.858 21:55:41 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:34.117 [2024-12-10 21:55:41.588720] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:24:34.117 [2024-12-10 21:55:41.589130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80246 ] 00:24:34.117 [2024-12-10 21:55:41.771400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.376 [2024-12-10 21:55:41.882608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.634 [2024-12-10 21:55:42.261944] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:34.634 [2024-12-10 21:55:42.262010] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:34.894 [2024-12-10 21:55:42.424471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.894 [2024-12-10 21:55:42.424520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:34.894 [2024-12-10 21:55:42.424536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:34.894 [2024-12-10 21:55:42.424546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.894 [2024-12-10 21:55:42.427756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.894 [2024-12-10 21:55:42.427798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:34.894 [2024-12-10 21:55:42.427810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.195 ms 00:24:34.894 [2024-12-10 21:55:42.427820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.894 [2024-12-10 21:55:42.427916] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:34.894 [2024-12-10 21:55:42.428948] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:34.894 [2024-12-10 21:55:42.428984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.894 [2024-12-10 21:55:42.428996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:34.894 [2024-12-10 21:55:42.429008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:24:34.894 [2024-12-10 21:55:42.429018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.894 [2024-12-10 21:55:42.431161] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:34.894 [2024-12-10 21:55:42.450985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.894 [2024-12-10 21:55:42.451022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:34.894 [2024-12-10 21:55:42.451037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.856 ms 00:24:34.894 [2024-12-10 21:55:42.451046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.894 [2024-12-10 21:55:42.451177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.895 [2024-12-10 21:55:42.451193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:34.895 [2024-12-10 21:55:42.451204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:34.895 [2024-12-10 
21:55:42.451214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.895 [2024-12-10 21:55:42.458646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.895 [2024-12-10 21:55:42.458875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:34.895 [2024-12-10 21:55:42.458897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.403 ms 00:24:34.895 [2024-12-10 21:55:42.458908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.895 [2024-12-10 21:55:42.459027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.895 [2024-12-10 21:55:42.459041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:34.895 [2024-12-10 21:55:42.459076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:34.895 [2024-12-10 21:55:42.459103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.895 [2024-12-10 21:55:42.459137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.895 [2024-12-10 21:55:42.459150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:34.895 [2024-12-10 21:55:42.459160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:34.895 [2024-12-10 21:55:42.459171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.895 [2024-12-10 21:55:42.459193] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:34.895 [2024-12-10 21:55:42.463857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.895 [2024-12-10 21:55:42.463887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:34.895 [2024-12-10 21:55:42.463899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.675 ms 00:24:34.895 [2024-12-10 21:55:42.463909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.895 [2024-12-10 21:55:42.463978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.895 [2024-12-10 21:55:42.463991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:34.895 [2024-12-10 21:55:42.464001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:34.895 [2024-12-10 21:55:42.464011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.895 [2024-12-10 21:55:42.464034] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:34.895 [2024-12-10 21:55:42.464073] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:34.895 [2024-12-10 21:55:42.464108] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:34.895 [2024-12-10 21:55:42.464126] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:34.895 [2024-12-10 21:55:42.464211] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:34.895 [2024-12-10 21:55:42.464224] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:34.895 [2024-12-10 21:55:42.464237] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
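[Note: the geometry reported by ftl_layout_setup below makes the layout dump self-consistent: an L2P (logical-to-physical) table of 23592960 entries at 4 bytes per address works out to exactly the 90.00 MiB listed for the l2p region further on. A quick C check of that arithmetic, for illustration only and not SPDK code:]

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Values as reported by ftl_layout_setup in the log below. */
    uint64_t l2p_entries   = 23592960; /* one entry per logical block */
    uint64_t l2p_addr_size = 4;        /* bytes per physical address  */

    double mib = (double)(l2p_entries * l2p_addr_size) / (1024.0 * 1024.0);
    printf("L2P table size: %.2f MiB\n", mib); /* prints: 90.00 MiB */
    return 0;
}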
00:24:34.895 [2024-12-10 21:55:42.464254] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464266] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464278] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:34.895 [2024-12-10 21:55:42.464287] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:34.895 [2024-12-10 21:55:42.464297] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:34.895 [2024-12-10 21:55:42.464307] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:34.895 [2024-12-10 21:55:42.464318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.895 [2024-12-10 21:55:42.464329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:34.895 [2024-12-10 21:55:42.464339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:24:34.895 [2024-12-10 21:55:42.464348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.895 [2024-12-10 21:55:42.464420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.895 [2024-12-10 21:55:42.464435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:34.895 [2024-12-10 21:55:42.464445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:34.895 [2024-12-10 21:55:42.464455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.895 [2024-12-10 21:55:42.464536] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:34.895 [2024-12-10 21:55:42.464548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:34.895 [2024-12-10 21:55:42.464558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:34.895 [2024-12-10 21:55:42.464606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:34.895 [2024-12-10 21:55:42.464635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.895 [2024-12-10 21:55:42.464655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:34.895 [2024-12-10 21:55:42.464676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:34.895 [2024-12-10 21:55:42.464686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.895 [2024-12-10 21:55:42.464695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:34.895 [2024-12-10 21:55:42.464705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:34.895 [2024-12-10 21:55:42.464715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:24:34.895 [2024-12-10 21:55:42.464733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:34.895 [2024-12-10 21:55:42.464762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:34.895 [2024-12-10 21:55:42.464789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:34.895 [2024-12-10 21:55:42.464816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:34.895 [2024-12-10 21:55:42.464842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.895 [2024-12-10 21:55:42.464860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:34.895 [2024-12-10 21:55:42.464868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.895 [2024-12-10 21:55:42.464886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:34.895 [2024-12-10 21:55:42.464894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:34.895 [2024-12-10 21:55:42.464904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.895 [2024-12-10 21:55:42.464912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:34.895 [2024-12-10 21:55:42.464921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:34.895 [2024-12-10 21:55:42.464929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:34.895 [2024-12-10 21:55:42.464949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:34.895 [2024-12-10 21:55:42.464958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.895 [2024-12-10 21:55:42.464967] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:34.895 [2024-12-10 21:55:42.464977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:34.895 [2024-12-10 21:55:42.464991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.895 [2024-12-10 21:55:42.465001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.895 [2024-12-10 21:55:42.465011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:34.895 [2024-12-10 21:55:42.465020] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:34.895 [2024-12-10 21:55:42.465030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:34.895 [2024-12-10 21:55:42.465039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:34.895 [2024-12-10 21:55:42.465048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:34.895 [2024-12-10 21:55:42.465058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:34.895 [2024-12-10 21:55:42.465080] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:34.895 [2024-12-10 21:55:42.465093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.895 [2024-12-10 21:55:42.465104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:34.895 [2024-12-10 21:55:42.465114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:34.895 [2024-12-10 21:55:42.465124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:34.895 [2024-12-10 21:55:42.465135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:34.895 [2024-12-10 21:55:42.465145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:34.895 [2024-12-10 21:55:42.465156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:34.896 [2024-12-10 21:55:42.465166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:34.896 [2024-12-10 21:55:42.465176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:34.896 [2024-12-10 21:55:42.465186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:34.896 [2024-12-10 21:55:42.465196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:34.896 [2024-12-10 21:55:42.465207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:34.896 [2024-12-10 21:55:42.465217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:34.896 [2024-12-10 21:55:42.465227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:34.896 [2024-12-10 21:55:42.465236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:34.896 [2024-12-10 21:55:42.465247] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:34.896 [2024-12-10 21:55:42.465258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.896 [2024-12-10 21:55:42.465269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:34.896 [2024-12-10 21:55:42.465290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:34.896 [2024-12-10 21:55:42.465300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:34.896 [2024-12-10 21:55:42.465326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:34.896 [2024-12-10 21:55:42.465336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.465352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:34.896 [2024-12-10 21:55:42.465362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:24:34.896 [2024-12-10 21:55:42.465372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.896 [2024-12-10 21:55:42.505790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.505825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:34.896 [2024-12-10 21:55:42.505839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.427 ms 00:24:34.896 [2024-12-10 21:55:42.505850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.896 [2024-12-10 21:55:42.505967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.505980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:34.896 [2024-12-10 21:55:42.505991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:34.896 [2024-12-10 21:55:42.506001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.896 [2024-12-10 21:55:42.579979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.580013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:34.896 [2024-12-10 21:55:42.580030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.076 ms 00:24:34.896 [2024-12-10 21:55:42.580040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.896 [2024-12-10 21:55:42.580145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.580159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:34.896 [2024-12-10 21:55:42.580170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:34.896 [2024-12-10 21:55:42.580180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.896 [2024-12-10 21:55:42.580633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.580652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:34.896 [2024-12-10 21:55:42.580664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:24:34.896 [2024-12-10 21:55:42.580678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.896 [2024-12-10 21:55:42.580795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.580810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:34.896 [2024-12-10 21:55:42.580821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:34.896 [2024-12-10 21:55:42.580832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.896 [2024-12-10 21:55:42.601042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.601089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:34.896 [2024-12-10 21:55:42.601102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.221 ms 00:24:34.896 [2024-12-10 21:55:42.601113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.896 [2024-12-10 21:55:42.619376] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:34.896 [2024-12-10 21:55:42.619563] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:34.896 [2024-12-10 21:55:42.619676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.896 [2024-12-10 21:55:42.619710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:34.896 [2024-12-10 21:55:42.619742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.483 ms 00:24:34.896 [2024-12-10 21:55:42.619772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.647939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.648104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:35.156 [2024-12-10 21:55:42.648267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.113 ms 00:24:35.156 [2024-12-10 21:55:42.648306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.665498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.665629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:35.156 [2024-12-10 21:55:42.665714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.118 ms 00:24:35.156 [2024-12-10 21:55:42.665750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.682589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.682727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:35.156 [2024-12-10 21:55:42.682763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.770 ms 00:24:35.156 [2024-12-10 21:55:42.682773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.683643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.683676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:35.156 [2024-12-10 21:55:42.683689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:24:35.156 [2024-12-10 21:55:42.683699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.766004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 
21:55:42.766074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:35.156 [2024-12-10 21:55:42.766108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.406 ms 00:24:35.156 [2024-12-10 21:55:42.766120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.776008] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:35.156 [2024-12-10 21:55:42.791428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.791469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:35.156 [2024-12-10 21:55:42.791484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.237 ms 00:24:35.156 [2024-12-10 21:55:42.791500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.791615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.791629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:35.156 [2024-12-10 21:55:42.791641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:35.156 [2024-12-10 21:55:42.791650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.791706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.791718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:35.156 [2024-12-10 21:55:42.791728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:35.156 [2024-12-10 21:55:42.791742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.791776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.791789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:35.156 [2024-12-10 21:55:42.791799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:35.156 [2024-12-10 21:55:42.791809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.791849] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:35.156 [2024-12-10 21:55:42.791861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.791871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:35.156 [2024-12-10 21:55:42.791880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:35.156 [2024-12-10 21:55:42.791891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.827963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.828005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:35.156 [2024-12-10 21:55:42.828019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.107 ms 00:24:35.156 [2024-12-10 21:55:42.828030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.828164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.156 [2024-12-10 21:55:42.828180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:35.156 [2024-12-10 
21:55:42.828192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:35.156 [2024-12-10 21:55:42.828202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.156 [2024-12-10 21:55:42.829165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:35.156 [2024-12-10 21:55:42.833721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.030 ms, result 0 00:24:35.156 [2024-12-10 21:55:42.834585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:35.156 [2024-12-10 21:55:42.853247] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:36.535  [2024-12-10T21:55:45.204Z] Copying: 26/256 [MB] (26 MBps) [2024-12-10T21:55:46.140Z] Copying: 49/256 [MB] (23 MBps) [2024-12-10T21:55:47.103Z] Copying: 71/256 [MB] (22 MBps) [2024-12-10T21:55:48.080Z] Copying: 94/256 [MB] (22 MBps) [2024-12-10T21:55:49.017Z] Copying: 117/256 [MB] (22 MBps) [2024-12-10T21:55:49.953Z] Copying: 139/256 [MB] (22 MBps) [2024-12-10T21:55:51.331Z] Copying: 162/256 [MB] (22 MBps) [2024-12-10T21:55:51.899Z] Copying: 185/256 [MB] (23 MBps) [2024-12-10T21:55:53.277Z] Copying: 208/256 [MB] (23 MBps) [2024-12-10T21:55:54.214Z] Copying: 233/256 [MB] (24 MBps) [2024-12-10T21:55:54.473Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-10 21:55:54.300120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:46.742 [2024-12-10 21:55:54.316675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.316723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:46.742 [2024-12-10 21:55:54.316747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:46.742 [2024-12-10 21:55:54.316758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.316789] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:46.742 [2024-12-10 21:55:54.321247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.321282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:46.742 [2024-12-10 21:55:54.321303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:24:46.742 [2024-12-10 21:55:54.321314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.321581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.321597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:46.742 [2024-12-10 21:55:54.321610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:24:46.742 [2024-12-10 21:55:54.321620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.324738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.324766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:46.742 [2024-12-10 21:55:54.324778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:24:46.742 [2024-12-10 21:55:54.324789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.330662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.330709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:46.742 [2024-12-10 21:55:54.330723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.854 ms 00:24:46.742 [2024-12-10 21:55:54.330733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.368593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.368637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:46.742 [2024-12-10 21:55:54.368650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.820 ms 00:24:46.742 [2024-12-10 21:55:54.368660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.389272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.389445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:46.742 [2024-12-10 21:55:54.389492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.575 ms 00:24:46.742 [2024-12-10 21:55:54.389504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.389676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.389692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:46.742 [2024-12-10 21:55:54.389723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:46.742 [2024-12-10 21:55:54.389735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.424605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.424641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:46.742 [2024-12-10 21:55:54.424654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.906 ms 00:24:46.742 [2024-12-10 21:55:54.424664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.742 [2024-12-10 21:55:54.458831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.742 [2024-12-10 21:55:54.458878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:46.742 [2024-12-10 21:55:54.458891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.163 ms 00:24:46.742 [2024-12-10 21:55:54.458916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.002 [2024-12-10 21:55:54.493222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.002 [2024-12-10 21:55:54.493361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:47.002 [2024-12-10 21:55:54.493380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.302 ms 00:24:47.002 [2024-12-10 21:55:54.493406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.002 [2024-12-10 21:55:54.527110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.002 [2024-12-10 21:55:54.527146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:47.002 [2024-12-10 21:55:54.527159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.670 ms 00:24:47.002 
[2024-12-10 21:55:54.527184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.002 [2024-12-10 21:55:54.527242] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:47.002 [2024-12-10 21:55:54.527270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527519] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:47.002 [2024-12-10 21:55:54.527732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 
21:55:54.527783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.527990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:24:47.003 [2024-12-10 21:55:54.528042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:47.003 [2024-12-10 21:55:54.528391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:47.003 [2024-12-10 21:55:54.528402] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c1c8a73d-fd96-4644-ab7c-747f40be9c54 00:24:47.003 [2024-12-10 21:55:54.528413] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:47.003 [2024-12-10 21:55:54.528423] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:47.003 [2024-12-10 21:55:54.528433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:47.003 [2024-12-10 21:55:54.528443] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:47.003 [2024-12-10 21:55:54.528453] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:47.003 [2024-12-10 21:55:54.528464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:47.003 [2024-12-10 21:55:54.528477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:47.003 [2024-12-10 21:55:54.528487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:47.003 [2024-12-10 21:55:54.528495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:47.003 [2024-12-10 21:55:54.528505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.003 [2024-12-10 21:55:54.528516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:47.003 [2024-12-10 21:55:54.528527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:24:47.003 [2024-12-10 21:55:54.528553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.003 [2024-12-10 21:55:54.547474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.003 [2024-12-10 21:55:54.547645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:47.003 [2024-12-10 21:55:54.547671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.928 ms 00:24:47.003 [2024-12-10 21:55:54.547681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.003 [2024-12-10 21:55:54.548226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.003 [2024-12-10 21:55:54.548242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:47.003 [2024-12-10 21:55:54.548253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:24:47.003 [2024-12-10 21:55:54.548263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.003 [2024-12-10 21:55:54.600387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.003 [2024-12-10 21:55:54.600424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.003 [2024-12-10 21:55:54.600437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.003 [2024-12-10 21:55:54.600452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.003 [2024-12-10 21:55:54.600525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.003 [2024-12-10 21:55:54.600537] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.003 [2024-12-10 21:55:54.600547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.003 [2024-12-10 21:55:54.600557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.003 [2024-12-10 21:55:54.600607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.003 [2024-12-10 21:55:54.600619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.003 [2024-12-10 21:55:54.600629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.003 [2024-12-10 21:55:54.600639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.003 [2024-12-10 21:55:54.600661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.003 [2024-12-10 21:55:54.600671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.003 [2024-12-10 21:55:54.600681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.003 [2024-12-10 21:55:54.600691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.003 [2024-12-10 21:55:54.717184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.003 [2024-12-10 21:55:54.717408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.003 [2024-12-10 21:55:54.717448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.003 [2024-12-10 21:55:54.717460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.262 [2024-12-10 21:55:54.811672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.262 [2024-12-10 21:55:54.811720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.262 [2024-12-10 21:55:54.811734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.262 [2024-12-10 21:55:54.811744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.262 [2024-12-10 21:55:54.811807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.262 [2024-12-10 21:55:54.811829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.263 [2024-12-10 21:55:54.811839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.263 [2024-12-10 21:55:54.811850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.263 [2024-12-10 21:55:54.811878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.263 [2024-12-10 21:55:54.811896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.263 [2024-12-10 21:55:54.811906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.263 [2024-12-10 21:55:54.811916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.263 [2024-12-10 21:55:54.812032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.263 [2024-12-10 21:55:54.812045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.263 [2024-12-10 21:55:54.812078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.263 [2024-12-10 21:55:54.812104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.263 [2024-12-10 21:55:54.812162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:47.263 [2024-12-10 21:55:54.812177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:47.263 [2024-12-10 21:55:54.812193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.263 [2024-12-10 21:55:54.812203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.263 [2024-12-10 21:55:54.812249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.263 [2024-12-10 21:55:54.812260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.263 [2024-12-10 21:55:54.812271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.263 [2024-12-10 21:55:54.812281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.263 [2024-12-10 21:55:54.812327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.263 [2024-12-10 21:55:54.812343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.263 [2024-12-10 21:55:54.812353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.263 [2024-12-10 21:55:54.812364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.263 [2024-12-10 21:55:54.812530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 496.685 ms, result 0 00:24:48.203 00:24:48.203 00:24:48.203 21:55:55 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:48.775 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:24:48.775 21:55:56 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:24:48.775 21:55:56 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:24:48.775 21:55:56 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:48.775 21:55:56 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:48.775 21:55:56 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:24:48.775 21:55:56 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:48.775 21:55:56 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80182 00:24:48.775 21:55:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 80182 ']' 00:24:48.775 21:55:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 80182 00:24:48.775 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80182) - No such process 00:24:48.775 21:55:56 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 80182 is not found' 00:24:48.775 Process with pid 80182 is not found 00:24:48.775 ************************************ 00:24:48.775 END TEST ftl_trim 00:24:48.775 ************************************ 00:24:48.775 00:24:48.775 real 1m13.149s 00:24:48.775 user 1m39.318s 00:24:48.775 sys 0m7.136s 00:24:48.775 21:55:56 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.775 21:55:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:48.775 21:55:56 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:48.775 21:55:56 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:48.775 21:55:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.775 21:55:56 ftl -- common/autotest_common.sh@10 
-- # set +x 00:24:49.035 ************************************ 00:24:49.035 START TEST ftl_restore 00:24:49.035 ************************************ 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:49.035 * Looking for test storage... 00:24:49.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.035 21:55:56 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:49.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.035 --rc genhtml_branch_coverage=1 00:24:49.035 --rc genhtml_function_coverage=1 00:24:49.035 --rc genhtml_legend=1 00:24:49.035 --rc geninfo_all_blocks=1 00:24:49.035 --rc geninfo_unexecuted_blocks=1 00:24:49.035 00:24:49.035 ' 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:49.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.035 --rc genhtml_branch_coverage=1 00:24:49.035 --rc genhtml_function_coverage=1 00:24:49.035 --rc genhtml_legend=1 00:24:49.035 --rc geninfo_all_blocks=1 00:24:49.035 --rc geninfo_unexecuted_blocks=1 00:24:49.035 00:24:49.035 ' 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:49.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.035 --rc genhtml_branch_coverage=1 00:24:49.035 --rc genhtml_function_coverage=1 00:24:49.035 --rc genhtml_legend=1 00:24:49.035 --rc geninfo_all_blocks=1 00:24:49.035 --rc geninfo_unexecuted_blocks=1 00:24:49.035 00:24:49.035 ' 00:24:49.035 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:49.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.035 --rc genhtml_branch_coverage=1 00:24:49.035 --rc genhtml_function_coverage=1 00:24:49.035 --rc genhtml_legend=1 00:24:49.035 --rc geninfo_all_blocks=1 00:24:49.035 --rc geninfo_unexecuted_blocks=1 00:24:49.035 00:24:49.035 ' 00:24:49.035 21:55:56 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:49.035 21:55:56 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:24:49.035 21:55:56 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
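
The xtrace above is scripts/common.sh gating lcov coverage flags on the installed lcov version: each version string is split on the separators ".-:" and compared field by field until one side differs. A minimal standalone sketch of that comparison (the helper name ver_lt is illustrative; the repo's own helpers are lt/cmp_versions as traced):

    ver_lt() {
        # Split both versions on the same separators the trace uses (IFS=.-:)
        local IFS=.-: i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first lower field wins
            ((${a[i]:-0} > ${b[i]:-0})) && return 1   # first higher field loses
        done
        return 1                                      # equal => not less-than
    }

    ver_lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_branch_coverage=1 flags"

This mirrors why the run above exports LCOV_OPTS with the lcov_branch_coverage/lcov_function_coverage switches: the detected lcov 1.15 compares less than 2.
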
00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:49.294 21:55:56 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.R8ZOT4B6gh 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:49.295 
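
For reference, the getopts walk just traced parses the invocation shown earlier in this log (restore.sh -c 0000:00:10.0 0000:00:11.0). A condensed sketch of that parsing; only the -c path is exercised in this run, so the -u/-f semantics are assumptions:

    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # -c: PCIe address of the NV-cache device
            *) ;;                    # -u/-f branches not taken in this run
        esac
    done
    shift 2                          # drops "-c <bdf>", matching the literal "shift 2" traced above
    device=$1                        # remaining positional: base device 0000:00:11.0
    timeout=240

With the options consumed, the trap installs restore_kill for SIGINT/SIGTERM/EXIT and the test launches spdk_tgt, as the next trace lines show.
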
21:55:56 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80469 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80469 00:24:49.295 21:55:56 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:49.295 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 80469 ']' 00:24:49.295 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.295 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.295 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.295 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.295 21:55:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:49.295 [2024-12-10 21:55:56.913822] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:24:49.295 [2024-12-10 21:55:56.913948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80469 ] 00:24:49.553 [2024-12-10 21:55:57.096333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.553 [2024-12-10 21:55:57.208311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.489 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.489 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:24:50.489 21:55:58 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:50.489 21:55:58 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:24:50.489 21:55:58 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:50.489 21:55:58 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:24:50.489 21:55:58 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:24:50.489 21:55:58 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:50.748 21:55:58 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:50.748 21:55:58 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:24:50.748 21:55:58 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:50.748 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:50.748 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:50.748 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:50.748 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:50.748 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:51.007 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:51.007 { 00:24:51.007 "name": "nvme0n1", 00:24:51.007 "aliases": [ 00:24:51.007 "07bcc5a1-8c92-48a3-b5e4-2f39ae2ed4d3" 00:24:51.007 ], 00:24:51.007 "product_name": "NVMe disk", 00:24:51.007 "block_size": 4096, 00:24:51.007 "num_blocks": 1310720, 00:24:51.007 "uuid": 
"07bcc5a1-8c92-48a3-b5e4-2f39ae2ed4d3", 00:24:51.007 "numa_id": -1, 00:24:51.007 "assigned_rate_limits": { 00:24:51.007 "rw_ios_per_sec": 0, 00:24:51.007 "rw_mbytes_per_sec": 0, 00:24:51.007 "r_mbytes_per_sec": 0, 00:24:51.007 "w_mbytes_per_sec": 0 00:24:51.007 }, 00:24:51.007 "claimed": true, 00:24:51.007 "claim_type": "read_many_write_one", 00:24:51.007 "zoned": false, 00:24:51.007 "supported_io_types": { 00:24:51.007 "read": true, 00:24:51.007 "write": true, 00:24:51.007 "unmap": true, 00:24:51.007 "flush": true, 00:24:51.007 "reset": true, 00:24:51.007 "nvme_admin": true, 00:24:51.007 "nvme_io": true, 00:24:51.007 "nvme_io_md": false, 00:24:51.007 "write_zeroes": true, 00:24:51.007 "zcopy": false, 00:24:51.007 "get_zone_info": false, 00:24:51.007 "zone_management": false, 00:24:51.007 "zone_append": false, 00:24:51.007 "compare": true, 00:24:51.007 "compare_and_write": false, 00:24:51.007 "abort": true, 00:24:51.007 "seek_hole": false, 00:24:51.007 "seek_data": false, 00:24:51.007 "copy": true, 00:24:51.007 "nvme_iov_md": false 00:24:51.007 }, 00:24:51.007 "driver_specific": { 00:24:51.007 "nvme": [ 00:24:51.007 { 00:24:51.007 "pci_address": "0000:00:11.0", 00:24:51.007 "trid": { 00:24:51.007 "trtype": "PCIe", 00:24:51.007 "traddr": "0000:00:11.0" 00:24:51.007 }, 00:24:51.007 "ctrlr_data": { 00:24:51.007 "cntlid": 0, 00:24:51.007 "vendor_id": "0x1b36", 00:24:51.007 "model_number": "QEMU NVMe Ctrl", 00:24:51.007 "serial_number": "12341", 00:24:51.007 "firmware_revision": "8.0.0", 00:24:51.007 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:51.007 "oacs": { 00:24:51.007 "security": 0, 00:24:51.007 "format": 1, 00:24:51.007 "firmware": 0, 00:24:51.007 "ns_manage": 1 00:24:51.007 }, 00:24:51.007 "multi_ctrlr": false, 00:24:51.007 "ana_reporting": false 00:24:51.007 }, 00:24:51.007 "vs": { 00:24:51.007 "nvme_version": "1.4" 00:24:51.007 }, 00:24:51.007 "ns_data": { 00:24:51.007 "id": 1, 00:24:51.007 "can_share": false 00:24:51.007 } 00:24:51.007 } 00:24:51.007 ], 00:24:51.007 "mp_policy": "active_passive" 00:24:51.007 } 00:24:51.007 } 00:24:51.007 ]' 00:24:51.007 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:51.007 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:51.007 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:51.007 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:51.007 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:51.007 21:55:58 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:24:51.007 21:55:58 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:24:51.007 21:55:58 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:51.007 21:55:58 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:24:51.007 21:55:58 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:51.007 21:55:58 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:51.266 21:55:58 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=bb063b47-4024-469d-a5e1-452510ec0189 00:24:51.266 21:55:58 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:24:51.266 21:55:58 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb063b47-4024-469d-a5e1-452510ec0189 00:24:51.524 21:55:59 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:24:51.783 21:55:59 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=032a69f6-c166-44ec-ac35-28deb46342de 00:24:51.783 21:55:59 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 032a69f6-c166-44ec-ac35-28deb46342de 00:24:51.783 21:55:59 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f457eef9-fd12-43f7-924a-d5bce791c451 00:24:51.783 21:55:59 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:24:51.784 21:55:59 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f457eef9-fd12-43f7-924a-d5bce791c451 00:24:51.784 21:55:59 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:24:51.784 21:55:59 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:51.784 21:55:59 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f457eef9-fd12-43f7-924a-d5bce791c451 00:24:51.784 21:55:59 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:24:51.784 21:55:59 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f457eef9-fd12-43f7-924a-d5bce791c451 00:24:51.784 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f457eef9-fd12-43f7-924a-d5bce791c451 00:24:51.784 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:51.784 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:51.784 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:51.784 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f457eef9-fd12-43f7-924a-d5bce791c451 00:24:52.042 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:52.042 { 00:24:52.042 "name": "f457eef9-fd12-43f7-924a-d5bce791c451", 00:24:52.042 "aliases": [ 00:24:52.042 "lvs/nvme0n1p0" 00:24:52.042 ], 00:24:52.042 "product_name": "Logical Volume", 00:24:52.042 "block_size": 4096, 00:24:52.042 "num_blocks": 26476544, 00:24:52.042 "uuid": "f457eef9-fd12-43f7-924a-d5bce791c451", 00:24:52.042 "assigned_rate_limits": { 00:24:52.042 "rw_ios_per_sec": 0, 00:24:52.042 "rw_mbytes_per_sec": 0, 00:24:52.042 "r_mbytes_per_sec": 0, 00:24:52.042 "w_mbytes_per_sec": 0 00:24:52.042 }, 00:24:52.042 "claimed": false, 00:24:52.042 "zoned": false, 00:24:52.042 "supported_io_types": { 00:24:52.043 "read": true, 00:24:52.043 "write": true, 00:24:52.043 "unmap": true, 00:24:52.043 "flush": false, 00:24:52.043 "reset": true, 00:24:52.043 "nvme_admin": false, 00:24:52.043 "nvme_io": false, 00:24:52.043 "nvme_io_md": false, 00:24:52.043 "write_zeroes": true, 00:24:52.043 "zcopy": false, 00:24:52.043 "get_zone_info": false, 00:24:52.043 "zone_management": false, 00:24:52.043 "zone_append": false, 00:24:52.043 "compare": false, 00:24:52.043 "compare_and_write": false, 00:24:52.043 "abort": false, 00:24:52.043 "seek_hole": true, 00:24:52.043 "seek_data": true, 00:24:52.043 "copy": false, 00:24:52.043 "nvme_iov_md": false 00:24:52.043 }, 00:24:52.043 "driver_specific": { 00:24:52.043 "lvol": { 00:24:52.043 "lvol_store_uuid": "032a69f6-c166-44ec-ac35-28deb46342de", 00:24:52.043 "base_bdev": "nvme0n1", 00:24:52.043 "thin_provision": true, 00:24:52.043 "num_allocated_clusters": 0, 00:24:52.043 "snapshot": false, 00:24:52.043 "clone": false, 00:24:52.043 "esnap_clone": false 00:24:52.043 } 00:24:52.043 } 00:24:52.043 } 00:24:52.043 ]' 00:24:52.043 21:55:59 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:52.043 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:52.043 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:52.301 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:52.301 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:52.301 21:55:59 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:52.301 21:55:59 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:24:52.301 21:55:59 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:24:52.301 21:55:59 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:52.560 21:56:00 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:52.560 21:56:00 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:52.560 21:56:00 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f457eef9-fd12-43f7-924a-d5bce791c451 00:24:52.560 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f457eef9-fd12-43f7-924a-d5bce791c451 00:24:52.560 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:52.560 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:52.560 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:52.560 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f457eef9-fd12-43f7-924a-d5bce791c451 00:24:52.560 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:52.560 { 00:24:52.560 "name": "f457eef9-fd12-43f7-924a-d5bce791c451", 00:24:52.560 "aliases": [ 00:24:52.560 "lvs/nvme0n1p0" 00:24:52.560 ], 00:24:52.560 "product_name": "Logical Volume", 00:24:52.560 "block_size": 4096, 00:24:52.560 "num_blocks": 26476544, 00:24:52.560 "uuid": "f457eef9-fd12-43f7-924a-d5bce791c451", 00:24:52.560 "assigned_rate_limits": { 00:24:52.560 "rw_ios_per_sec": 0, 00:24:52.560 "rw_mbytes_per_sec": 0, 00:24:52.560 "r_mbytes_per_sec": 0, 00:24:52.560 "w_mbytes_per_sec": 0 00:24:52.560 }, 00:24:52.560 "claimed": false, 00:24:52.560 "zoned": false, 00:24:52.560 "supported_io_types": { 00:24:52.560 "read": true, 00:24:52.560 "write": true, 00:24:52.560 "unmap": true, 00:24:52.560 "flush": false, 00:24:52.560 "reset": true, 00:24:52.560 "nvme_admin": false, 00:24:52.560 "nvme_io": false, 00:24:52.560 "nvme_io_md": false, 00:24:52.560 "write_zeroes": true, 00:24:52.560 "zcopy": false, 00:24:52.560 "get_zone_info": false, 00:24:52.560 "zone_management": false, 00:24:52.560 "zone_append": false, 00:24:52.560 "compare": false, 00:24:52.560 "compare_and_write": false, 00:24:52.560 "abort": false, 00:24:52.560 "seek_hole": true, 00:24:52.560 "seek_data": true, 00:24:52.560 "copy": false, 00:24:52.560 "nvme_iov_md": false 00:24:52.560 }, 00:24:52.560 "driver_specific": { 00:24:52.560 "lvol": { 00:24:52.560 "lvol_store_uuid": "032a69f6-c166-44ec-ac35-28deb46342de", 00:24:52.560 "base_bdev": "nvme0n1", 00:24:52.560 "thin_provision": true, 00:24:52.560 "num_allocated_clusters": 0, 00:24:52.560 "snapshot": false, 00:24:52.560 "clone": false, 00:24:52.560 "esnap_clone": false 00:24:52.560 } 00:24:52.560 } 00:24:52.560 } 00:24:52.560 ]' 00:24:52.560 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
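
The paired jq probes above (repeated for each bdev_get_bdevs dump in this log) implement the get_bdev_size helper: block_size times num_blocks, converted to MiB. A sketch using values from this run, which reproduces both sizes seen above (1310720 blocks at 4096 B -> 5120 MiB for nvme0n1, 26476544 blocks -> 103424 MiB for the lvol); the function name here is ours, not the repo's:

    bdev_size_mib() {
        local info bs nb
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$info")    # 4096 in both dumps above
        nb=$(jq '.[] .num_blocks' <<< "$info")
        echo $((bs * nb / 1024 / 1024))           # bytes -> MiB
    }

    bdev_size_mib f457eef9-fd12-43f7-924a-d5bce791c451   # prints 103424

The 103424 MiB result is what feeds the "[[ 103424 -le 5120 ]]" base-size check traced earlier and, shortly below, the 5171 MiB cache split.
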
00:24:52.819 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:52.819 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:52.819 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:52.819 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:52.819 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:52.819 21:56:00 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:24:52.819 21:56:00 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:53.078 21:56:00 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:24:53.078 21:56:00 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f457eef9-fd12-43f7-924a-d5bce791c451 00:24:53.078 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=f457eef9-fd12-43f7-924a-d5bce791c451 00:24:53.078 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:53.078 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:53.078 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:53.078 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f457eef9-fd12-43f7-924a-d5bce791c451 00:24:53.078 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:53.078 { 00:24:53.078 "name": "f457eef9-fd12-43f7-924a-d5bce791c451", 00:24:53.078 "aliases": [ 00:24:53.078 "lvs/nvme0n1p0" 00:24:53.078 ], 00:24:53.078 "product_name": "Logical Volume", 00:24:53.078 "block_size": 4096, 00:24:53.078 "num_blocks": 26476544, 00:24:53.078 "uuid": "f457eef9-fd12-43f7-924a-d5bce791c451", 00:24:53.078 "assigned_rate_limits": { 00:24:53.078 "rw_ios_per_sec": 0, 00:24:53.078 "rw_mbytes_per_sec": 0, 00:24:53.078 "r_mbytes_per_sec": 0, 00:24:53.078 "w_mbytes_per_sec": 0 00:24:53.078 }, 00:24:53.078 "claimed": false, 00:24:53.078 "zoned": false, 00:24:53.078 "supported_io_types": { 00:24:53.078 "read": true, 00:24:53.078 "write": true, 00:24:53.078 "unmap": true, 00:24:53.078 "flush": false, 00:24:53.078 "reset": true, 00:24:53.078 "nvme_admin": false, 00:24:53.078 "nvme_io": false, 00:24:53.078 "nvme_io_md": false, 00:24:53.078 "write_zeroes": true, 00:24:53.078 "zcopy": false, 00:24:53.078 "get_zone_info": false, 00:24:53.078 "zone_management": false, 00:24:53.078 "zone_append": false, 00:24:53.078 "compare": false, 00:24:53.078 "compare_and_write": false, 00:24:53.078 "abort": false, 00:24:53.078 "seek_hole": true, 00:24:53.078 "seek_data": true, 00:24:53.078 "copy": false, 00:24:53.078 "nvme_iov_md": false 00:24:53.078 }, 00:24:53.078 "driver_specific": { 00:24:53.078 "lvol": { 00:24:53.078 "lvol_store_uuid": "032a69f6-c166-44ec-ac35-28deb46342de", 00:24:53.079 "base_bdev": "nvme0n1", 00:24:53.079 "thin_provision": true, 00:24:53.079 "num_allocated_clusters": 0, 00:24:53.079 "snapshot": false, 00:24:53.079 "clone": false, 00:24:53.079 "esnap_clone": false 00:24:53.079 } 00:24:53.079 } 00:24:53.079 } 00:24:53.079 ]' 00:24:53.079 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:53.079 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:53.079 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:53.339 21:56:00 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:24:53.339 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:53.339 21:56:00 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:53.339 21:56:00 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:53.339 21:56:00 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f457eef9-fd12-43f7-924a-d5bce791c451 --l2p_dram_limit 10' 00:24:53.339 21:56:00 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:53.339 21:56:00 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:53.339 21:56:00 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:53.339 21:56:00 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:24:53.339 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:24:53.339 21:56:00 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f457eef9-fd12-43f7-924a-d5bce791c451 --l2p_dram_limit 10 -c nvc0n1p0 00:24:53.339 [2024-12-10 21:56:01.026845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.026898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:53.339 [2024-12-10 21:56:01.026918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:53.339 [2024-12-10 21:56:01.026946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.027008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.027021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:53.339 [2024-12-10 21:56:01.027035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:53.339 [2024-12-10 21:56:01.027046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.027235] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:53.339 [2024-12-10 21:56:01.028404] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:53.339 [2024-12-10 21:56:01.028537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.028555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:53.339 [2024-12-10 21:56:01.028573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:24:53.339 [2024-12-10 21:56:01.028584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.028737] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 94ce3722-1699-4a2c-9d85-909c122158e6 00:24:53.339 [2024-12-10 21:56:01.031146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.031186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:53.339 [2024-12-10 21:56:01.031200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:53.339 [2024-12-10 21:56:01.031215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.043796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 
21:56:01.043834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:53.339 [2024-12-10 21:56:01.043847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.526 ms 00:24:53.339 [2024-12-10 21:56:01.043860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.043953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.043969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:53.339 [2024-12-10 21:56:01.043980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:53.339 [2024-12-10 21:56:01.043998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.044077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.044111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:53.339 [2024-12-10 21:56:01.044123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:53.339 [2024-12-10 21:56:01.044140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.044165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:53.339 [2024-12-10 21:56:01.049356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.049388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:53.339 [2024-12-10 21:56:01.049405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.204 ms 00:24:53.339 [2024-12-10 21:56:01.049415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.049461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.049472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:53.339 [2024-12-10 21:56:01.049487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:53.339 [2024-12-10 21:56:01.049497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.049535] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:53.339 [2024-12-10 21:56:01.049682] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:53.339 [2024-12-10 21:56:01.049706] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:53.339 [2024-12-10 21:56:01.049721] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:53.339 [2024-12-10 21:56:01.049737] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:53.339 [2024-12-10 21:56:01.049749] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:53.339 [2024-12-10 21:56:01.049763] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:53.339 [2024-12-10 21:56:01.049774] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:53.339 [2024-12-10 21:56:01.049791] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:53.339 [2024-12-10 21:56:01.049801] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:53.339 [2024-12-10 21:56:01.049815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.049834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:53.339 [2024-12-10 21:56:01.049851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:24:53.339 [2024-12-10 21:56:01.049862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.339 [2024-12-10 21:56:01.049941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.339 [2024-12-10 21:56:01.049952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:53.339 [2024-12-10 21:56:01.049967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:53.340 [2024-12-10 21:56:01.049978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.340 [2024-12-10 21:56:01.050079] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:53.340 [2024-12-10 21:56:01.050091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:53.340 [2024-12-10 21:56:01.050104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:53.340 [2024-12-10 21:56:01.050137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:53.340 [2024-12-10 21:56:01.050169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:53.340 [2024-12-10 21:56:01.050190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:53.340 [2024-12-10 21:56:01.050200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:53.340 [2024-12-10 21:56:01.050213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:53.340 [2024-12-10 21:56:01.050224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:53.340 [2024-12-10 21:56:01.050236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:53.340 [2024-12-10 21:56:01.050245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:53.340 [2024-12-10 21:56:01.050267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:53.340 [2024-12-10 21:56:01.050301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:53.340 
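The "[: : integer expression expected" message from restore.sh line 54 above is bash, not FTL: the trace shows '[' '' -eq 1 ']', i.e. an empty value was handed to the numeric -eq operator, so the test command prints the error and returns status 2, which simply reads as "false" here and the script moves on. A minimal reproduction with two common guards; "opt" is a hypothetical stand-in for whatever variable was empty at line 54:

    opt=''
    if [ "$opt" -eq 1 ]; then      # '[' '' -eq 1 ']' -> "[: : integer expression expected", status 2
        echo "branch taken"
    fi
    # Guard 1: supply a default so the operand is always an integer.
    if [ "${opt:-0}" -eq 1 ]; then echo "branch taken"; fi
    # Guard 2: check non-empty as a string before comparing numerically.
    if [ -n "$opt" ] && [ "$opt" -eq 1 ]; then echo "branch taken"; fi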
[2024-12-10 21:56:01.050331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:53.340 [2024-12-10 21:56:01.050364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:53.340 [2024-12-10 21:56:01.050419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:53.340 [2024-12-10 21:56:01.050456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:53.340 [2024-12-10 21:56:01.050478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:53.340 [2024-12-10 21:56:01.050488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:53.340 [2024-12-10 21:56:01.050500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:53.340 [2024-12-10 21:56:01.050509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:53.340 [2024-12-10 21:56:01.050523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:53.340 [2024-12-10 21:56:01.050532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:53.340 [2024-12-10 21:56:01.050554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:53.340 [2024-12-10 21:56:01.050565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050574] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:53.340 [2024-12-10 21:56:01.050587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:53.340 [2024-12-10 21:56:01.050598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:53.340 [2024-12-10 21:56:01.050624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:53.340 [2024-12-10 21:56:01.050639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:53.340 [2024-12-10 21:56:01.050648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:53.340 [2024-12-10 21:56:01.050661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:53.340 [2024-12-10 21:56:01.050671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:53.340 [2024-12-10 21:56:01.050683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:53.340 [2024-12-10 21:56:01.050694] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:53.340 [2024-12-10 
21:56:01.050710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:53.340 [2024-12-10 21:56:01.050726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:53.340 [2024-12-10 21:56:01.050740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:53.340 [2024-12-10 21:56:01.050751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:53.340 [2024-12-10 21:56:01.050764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:53.340 [2024-12-10 21:56:01.050775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:53.340 [2024-12-10 21:56:01.050788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:53.340 [2024-12-10 21:56:01.050799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:53.340 [2024-12-10 21:56:01.050812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:53.340 [2024-12-10 21:56:01.050823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:53.340 [2024-12-10 21:56:01.050841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:53.340 [2024-12-10 21:56:01.050851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:53.340 [2024-12-10 21:56:01.050865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:53.340 [2024-12-10 21:56:01.050875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:53.340 [2024-12-10 21:56:01.050888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:53.340 [2024-12-10 21:56:01.050898] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:53.340 [2024-12-10 21:56:01.050912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:53.340 [2024-12-10 21:56:01.050923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:53.340 [2024-12-10 21:56:01.050936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:53.340 [2024-12-10 21:56:01.050947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:53.340 [2024-12-10 21:56:01.050961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:53.340 [2024-12-10 21:56:01.050972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.340 [2024-12-10 21:56:01.050984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:53.340 [2024-12-10 21:56:01.050996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:24:53.340 [2024-12-10 21:56:01.051009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.340 [2024-12-10 21:56:01.051051] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:53.340 [2024-12-10 21:56:01.051335] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:57.536 [2024-12-10 21:56:04.701283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.536 [2024-12-10 21:56:04.701602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:57.536 [2024-12-10 21:56:04.701694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3656.156 ms 00:24:57.536 [2024-12-10 21:56:04.701737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.536 [2024-12-10 21:56:04.741688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.536 [2024-12-10 21:56:04.741944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.536 [2024-12-10 21:56:04.742153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.577 ms 00:24:57.536 [2024-12-10 21:56:04.742203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.536 [2024-12-10 21:56:04.742357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.536 [2024-12-10 21:56:04.742544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:57.536 [2024-12-10 21:56:04.742789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:57.536 [2024-12-10 21:56:04.742837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.536 [2024-12-10 21:56:04.792599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.536 [2024-12-10 21:56:04.792804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:57.536 [2024-12-10 21:56:04.792919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.770 ms 00:24:57.536 [2024-12-10 21:56:04.792964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.536 [2024-12-10 21:56:04.793024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.536 [2024-12-10 21:56:04.793190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:57.536 [2024-12-10 21:56:04.793208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:57.536 [2024-12-10 21:56:04.793233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.536 [2024-12-10 21:56:04.794065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.536 [2024-12-10 21:56:04.794090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:57.536 [2024-12-10 21:56:04.794103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:24:57.536 [2024-12-10 21:56:04.794116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.536 
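The "SB metadata layout" lines above repeat the region map in raw FTL blocks rather than MiB. Taking the l2p region as an example, "type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000" decodes to exactly the "offset: 0.12 MiB / blocks: 80.00 MiB" reported in the NV cache layout dump, assuming SPDK FTL's 4096-byte block size:

    # Decode 'Region type:0x2 ... blk_offs:0x20 blk_sz:0x5000' into KiB/MiB
    # (4096-byte FTL blocks assumed).
    blk=4096
    echo "offset: $(( 0x20 * blk / 1024 )) KiB"            # 128 KiB = 0.12 MiB
    echo "size:   $(( 0x5000 * blk / 1024 / 1024 )) MiB"   # 80 MiB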
[2024-12-10 21:56:04.794218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.536 [2024-12-10 21:56:04.794233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:57.536 [2024-12-10 21:56:04.794247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:57.537 [2024-12-10 21:56:04.794264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:04.817309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:04.817355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:57.537 [2024-12-10 21:56:04.817370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.053 ms 00:24:57.537 [2024-12-10 21:56:04.817400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:04.857285] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:57.537 [2024-12-10 21:56:04.862108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:04.862146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:57.537 [2024-12-10 21:56:04.862168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.691 ms 00:24:57.537 [2024-12-10 21:56:04.862183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:04.966073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:04.966135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:57.537 [2024-12-10 21:56:04.966157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.007 ms 00:24:57.537 [2024-12-10 21:56:04.966168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:04.966389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:04.966410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:57.537 [2024-12-10 21:56:04.966428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:24:57.537 [2024-12-10 21:56:04.966438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.001739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.001778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:57.537 [2024-12-10 21:56:05.001797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.300 ms 00:24:57.537 [2024-12-10 21:56:05.001808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.035977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.036188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:57.537 [2024-12-10 21:56:05.036219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.173 ms 00:24:57.537 [2024-12-10 21:56:05.036230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.037047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.037084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:57.537 
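The "l2p maximum resident size is: 9 (of 10) MiB" line ties back to the "--l2p_dram_limit 10" argument passed to bdev_ftl_create earlier: a fully resident L2P table for this device would need 20971520 entries x 4-byte addresses = 80 MiB, so with a 10 MiB budget only a cached slice is kept in DRAM (the cache apparently reserving 1 MiB of the budget for its own bookkeeping). A quick check of the arithmetic:

    # Full L2P table size vs. the DRAM cap from '--l2p_dram_limit 10'
    # (entry count and address size taken from the layout dump above).
    entries=20971520; addr_size=4
    echo "full L2P table: $(( entries * addr_size / 1024 / 1024 )) MiB"   # 80 MiB
    echo "dram limit: 10 MiB, resident per the log: 9 MiB"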
[2024-12-10 21:56:05.037101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:24:57.537 [2024-12-10 21:56:05.037115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.138157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.138200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:57.537 [2024-12-10 21:56:05.138222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.142 ms 00:24:57.537 [2024-12-10 21:56:05.138233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.173760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.173798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:57.537 [2024-12-10 21:56:05.173815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.499 ms 00:24:57.537 [2024-12-10 21:56:05.173826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.207945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.207984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:57.537 [2024-12-10 21:56:05.208000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.128 ms 00:24:57.537 [2024-12-10 21:56:05.208026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.243581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.243619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:57.537 [2024-12-10 21:56:05.243636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.554 ms 00:24:57.537 [2024-12-10 21:56:05.243647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.243694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.243706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:57.537 [2024-12-10 21:56:05.243723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:57.537 [2024-12-10 21:56:05.243733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.243842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.537 [2024-12-10 21:56:05.243859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:57.537 [2024-12-10 21:56:05.243872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:57.537 [2024-12-10 21:56:05.243882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.537 [2024-12-10 21:56:05.245166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4224.728 ms, result 0 00:24:57.537 { 00:24:57.537 "name": "ftl0", 00:24:57.537 "uuid": "94ce3722-1699-4a2c-9d85-909c122158e6" 00:24:57.537 } 00:24:57.796 21:56:05 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:57.796 21:56:05 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:57.796 21:56:05 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:57.796 21:56:05 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:58.055 [2024-12-10 21:56:05.671591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.055 [2024-12-10 21:56:05.671657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:58.055 [2024-12-10 21:56:05.671675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:58.055 [2024-12-10 21:56:05.671688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.055 [2024-12-10 21:56:05.671716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:58.055 [2024-12-10 21:56:05.675885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.055 [2024-12-10 21:56:05.675921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:58.055 [2024-12-10 21:56:05.675937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.152 ms 00:24:58.055 [2024-12-10 21:56:05.675948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.055 [2024-12-10 21:56:05.676260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.055 [2024-12-10 21:56:05.676288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:58.055 [2024-12-10 21:56:05.676304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:24:58.055 [2024-12-10 21:56:05.676315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.055 [2024-12-10 21:56:05.678822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.055 [2024-12-10 21:56:05.678846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:58.055 [2024-12-10 21:56:05.678860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.490 ms 00:24:58.056 [2024-12-10 21:56:05.678871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.056 [2024-12-10 21:56:05.683812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.056 [2024-12-10 21:56:05.683977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:58.056 [2024-12-10 21:56:05.684027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.924 ms 00:24:58.056 [2024-12-10 21:56:05.684039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.056 [2024-12-10 21:56:05.719580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.056 [2024-12-10 21:56:05.719621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:58.056 [2024-12-10 21:56:05.719638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.515 ms 00:24:58.056 [2024-12-10 21:56:05.719665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.056 [2024-12-10 21:56:05.741765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.056 [2024-12-10 21:56:05.741818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:58.056 [2024-12-10 21:56:05.741836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.083 ms 00:24:58.056 [2024-12-10 21:56:05.741847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.056 [2024-12-10 21:56:05.742012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.056 [2024-12-10 21:56:05.742026] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:58.056 [2024-12-10 21:56:05.742039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:58.056 [2024-12-10 21:56:05.742069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.056 [2024-12-10 21:56:05.776780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.056 [2024-12-10 21:56:05.776818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:58.056 [2024-12-10 21:56:05.776834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.724 ms 00:24:58.056 [2024-12-10 21:56:05.776844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.316 [2024-12-10 21:56:05.811154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.316 [2024-12-10 21:56:05.811314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:58.316 [2024-12-10 21:56:05.811357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.319 ms 00:24:58.316 [2024-12-10 21:56:05.811367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.316 [2024-12-10 21:56:05.846443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.316 [2024-12-10 21:56:05.846483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:58.316 [2024-12-10 21:56:05.846499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.055 ms 00:24:58.316 [2024-12-10 21:56:05.846509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.316 [2024-12-10 21:56:05.880647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.316 [2024-12-10 21:56:05.880685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:58.316 [2024-12-10 21:56:05.880701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.081 ms 00:24:58.316 [2024-12-10 21:56:05.880728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.316 [2024-12-10 21:56:05.880771] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:58.316 [2024-12-10 21:56:05.880790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880915] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.880999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:58.316 [2024-12-10 21:56:05.881230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 
[2024-12-10 21:56:05.881241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:58.317 [2024-12-10 21:56:05.881569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.881991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:58.317 [2024-12-10 21:56:05.882120] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:58.317 [2024-12-10 21:56:05.882132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94ce3722-1699-4a2c-9d85-909c122158e6 00:24:58.317 [2024-12-10 21:56:05.882144] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:58.317 [2024-12-10 21:56:05.882160] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:58.317 [2024-12-10 21:56:05.882175] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:58.317 [2024-12-10 21:56:05.882189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:58.317 [2024-12-10 21:56:05.882199] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:58.317 [2024-12-10 21:56:05.882213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:58.317 [2024-12-10 21:56:05.882223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:58.317 [2024-12-10 21:56:05.882236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:58.317 [2024-12-10 21:56:05.882245] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:58.317 [2024-12-10 21:56:05.882258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.317 [2024-12-10 21:56:05.882269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:58.317 [2024-12-10 21:56:05.882283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:24:58.317 [2024-12-10 21:56:05.882296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.317 [2024-12-10 21:56:05.902211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.317 [2024-12-10 21:56:05.902247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:58.317 [2024-12-10 21:56:05.902263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.888 ms 00:24:58.317 [2024-12-10 21:56:05.902273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.317 [2024-12-10 21:56:05.902877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.317 [2024-12-10 21:56:05.902893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:58.317 [2024-12-10 21:56:05.902911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:24:58.317 [2024-12-10 21:56:05.902921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.317 [2024-12-10 21:56:05.966758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.317 [2024-12-10 21:56:05.966946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.317 [2024-12-10 21:56:05.966973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.317 [2024-12-10 21:56:05.966985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.317 [2024-12-10 21:56:05.967046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.317 [2024-12-10 21:56:05.967072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.317 [2024-12-10 21:56:05.967091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.317 [2024-12-10 21:56:05.967101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.317 [2024-12-10 21:56:05.967214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.318 [2024-12-10 21:56:05.967228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.318 [2024-12-10 21:56:05.967242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.318 [2024-12-10 21:56:05.967254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.318 [2024-12-10 21:56:05.967280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.318 [2024-12-10 21:56:05.967292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.318 [2024-12-10 21:56:05.967305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.318 [2024-12-10 21:56:05.967319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.088218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.577 [2024-12-10 21:56:06.088273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.577 [2024-12-10 21:56:06.088310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:58.577 [2024-12-10 21:56:06.088321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.184255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.577 [2024-12-10 21:56:06.184306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.577 [2024-12-10 21:56:06.184324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.577 [2024-12-10 21:56:06.184340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.184460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.577 [2024-12-10 21:56:06.184472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.577 [2024-12-10 21:56:06.184486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.577 [2024-12-10 21:56:06.184496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.184553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.577 [2024-12-10 21:56:06.184565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.577 [2024-12-10 21:56:06.184578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.577 [2024-12-10 21:56:06.184588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.184704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.577 [2024-12-10 21:56:06.184717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.577 [2024-12-10 21:56:06.184730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.577 [2024-12-10 21:56:06.184740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.184785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.577 [2024-12-10 21:56:06.184797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:58.577 [2024-12-10 21:56:06.184810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.577 [2024-12-10 21:56:06.184820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.184869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.577 [2024-12-10 21:56:06.184880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.577 [2024-12-10 21:56:06.184893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.577 [2024-12-10 21:56:06.184903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.184956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.577 [2024-12-10 21:56:06.184968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.577 [2024-12-10 21:56:06.184981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.577 [2024-12-10 21:56:06.184991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.577 [2024-12-10 21:56:06.185175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 514.359 ms, result 0 00:24:58.577 true 00:24:58.577 21:56:06 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80469 
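With "FTL shutdown ... result 0" and bdev_ftl_unload returning true, the device state now lives only on the media plus the JSON captured earlier: restore.sh@61-63 wrapped the save_subsystem_config -n bdev output in a {"subsystems": [...]} envelope, presumably redirected into the ftl.json that spdk_dd consumes further down. A sketch of that capture step as the trace suggests it; the variable names are guesses, while the path and RPC calls are taken verbatim from the log:

    # Reconstructed from the restore.sh@61-63 trace lines above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    {
        echo '{"subsystems": ['
        "$rpc_py" save_subsystem_config -n bdev    # bdev subsystem only
        echo ']}'
    } > "$ftl_json"

The killprocess trace that follows is the stock autotest_common.sh teardown: verify the pid with kill -0, read the command name via ps --no-headers -o comm= (reactor_0, i.e. the SPDK app), refuse to kill anything named sudo, then kill and wait on the pid.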
00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80469 ']' 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80469 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80469 00:24:58.577 killing process with pid 80469 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80469' 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 80469 00:24:58.577 21:56:06 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 80469 00:25:03.865 21:56:11 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:08.122 262144+0 records in 00:25:08.122 262144+0 records out 00:25:08.122 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.01333 s, 268 MB/s 00:25:08.122 21:56:15 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:09.058 21:56:16 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:09.316 [2024-12-10 21:56:16.806709] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:25:09.316 [2024-12-10 21:56:16.807027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80705 ] 00:25:09.316 [2024-12-10 21:56:16.991779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.575 [2024-12-10 21:56:17.114450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.834 [2024-12-10 21:56:17.485483] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.834 [2024-12-10 21:56:17.485551] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:10.095 [2024-12-10 21:56:17.654368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.095 [2024-12-10 21:56:17.654646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:10.095 [2024-12-10 21:56:17.654673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:10.095 [2024-12-10 21:56:17.654685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.095 [2024-12-10 21:56:17.654746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.095 [2024-12-10 21:56:17.654763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:10.095 [2024-12-10 21:56:17.654775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:10.095 [2024-12-10 21:56:17.654786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.095 [2024-12-10 21:56:17.654810] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:25:10.096 [2024-12-10 21:56:17.655879] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:10.096 [2024-12-10 21:56:17.655908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.655920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:10.096 [2024-12-10 21:56:17.655931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:25:10.096 [2024-12-10 21:56:17.655941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.657421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:10.096 [2024-12-10 21:56:17.676204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.676379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:10.096 [2024-12-10 21:56:17.676403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.814 ms 00:25:10.096 [2024-12-10 21:56:17.676414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.676506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.676520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:10.096 [2024-12-10 21:56:17.676532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:10.096 [2024-12-10 21:56:17.676543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.684916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.684945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:10.096 [2024-12-10 21:56:17.684957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.310 ms 00:25:10.096 [2024-12-10 21:56:17.684971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.685068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.685081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:10.096 [2024-12-10 21:56:17.685092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:10.096 [2024-12-10 21:56:17.685101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.685139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.685151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:10.096 [2024-12-10 21:56:17.685161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:10.096 [2024-12-10 21:56:17.685171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.685198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:10.096 [2024-12-10 21:56:17.690928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.690960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:10.096 [2024-12-10 21:56:17.690977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.745 ms 00:25:10.096 [2024-12-10 21:56:17.691003] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.691038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.691049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:10.096 [2024-12-10 21:56:17.691060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:10.096 [2024-12-10 21:56:17.691081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.691132] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:10.096 [2024-12-10 21:56:17.691160] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:10.096 [2024-12-10 21:56:17.691202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:10.096 [2024-12-10 21:56:17.691227] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:10.096 [2024-12-10 21:56:17.691317] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:10.096 [2024-12-10 21:56:17.691331] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:10.096 [2024-12-10 21:56:17.691345] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:10.096 [2024-12-10 21:56:17.691358] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:10.096 [2024-12-10 21:56:17.691371] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:10.096 [2024-12-10 21:56:17.691382] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:10.096 [2024-12-10 21:56:17.691393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:10.096 [2024-12-10 21:56:17.691403] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:10.096 [2024-12-10 21:56:17.691417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:10.096 [2024-12-10 21:56:17.691427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.691438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:10.096 [2024-12-10 21:56:17.691459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:25:10.096 [2024-12-10 21:56:17.691469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.691538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.096 [2024-12-10 21:56:17.691549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:10.096 [2024-12-10 21:56:17.691559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:10.096 [2024-12-10 21:56:17.691568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.096 [2024-12-10 21:56:17.691650] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:10.096 [2024-12-10 21:56:17.691663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:10.096 [2024-12-10 21:56:17.691674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:25:10.096 [2024-12-10 21:56:17.691683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:10.096 [2024-12-10 21:56:17.691702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:10.096 [2024-12-10 21:56:17.691722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:10.096 [2024-12-10 21:56:17.691731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.096 [2024-12-10 21:56:17.691751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:10.096 [2024-12-10 21:56:17.691760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:10.096 [2024-12-10 21:56:17.691768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.096 [2024-12-10 21:56:17.691787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:10.096 [2024-12-10 21:56:17.691796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:10.096 [2024-12-10 21:56:17.691805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:10.096 [2024-12-10 21:56:17.691823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:10.096 [2024-12-10 21:56:17.691832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:10.096 [2024-12-10 21:56:17.691851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.096 [2024-12-10 21:56:17.691869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:10.096 [2024-12-10 21:56:17.691877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.096 [2024-12-10 21:56:17.691895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:10.096 [2024-12-10 21:56:17.691904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.096 [2024-12-10 21:56:17.691921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:10.096 [2024-12-10 21:56:17.691930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.096 [2024-12-10 21:56:17.691946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:10.096 [2024-12-10 21:56:17.691955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:10.096 [2024-12-10 21:56:17.691964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.096 [2024-12-10 21:56:17.691973] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:25:10.096 [2024-12-10 21:56:17.691982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:10.096 [2024-12-10 21:56:17.691990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.096 [2024-12-10 21:56:17.691999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:10.096 [2024-12-10 21:56:17.692008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:10.096 [2024-12-10 21:56:17.692016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.097 [2024-12-10 21:56:17.692025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:10.097 [2024-12-10 21:56:17.692033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:10.097 [2024-12-10 21:56:17.692044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.097 [2024-12-10 21:56:17.692053] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:10.097 [2024-12-10 21:56:17.692323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:10.097 [2024-12-10 21:56:17.692360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.097 [2024-12-10 21:56:17.692391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.097 [2024-12-10 21:56:17.692422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:10.097 [2024-12-10 21:56:17.692453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:10.097 [2024-12-10 21:56:17.692483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:10.097 [2024-12-10 21:56:17.692513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:10.097 [2024-12-10 21:56:17.692598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:10.097 [2024-12-10 21:56:17.692633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:10.097 [2024-12-10 21:56:17.692667] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:10.097 [2024-12-10 21:56:17.692718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.097 [2024-12-10 21:56:17.692824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:10.097 [2024-12-10 21:56:17.692878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:10.097 [2024-12-10 21:56:17.692926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:10.097 [2024-12-10 21:56:17.692974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:10.097 [2024-12-10 21:56:17.693079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:10.097 [2024-12-10 21:56:17.693281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:10.097 [2024-12-10 21:56:17.693332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:10.097 [2024-12-10 21:56:17.693380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:10.097 [2024-12-10 21:56:17.693429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:10.097 [2024-12-10 21:56:17.693521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:10.097 [2024-12-10 21:56:17.693574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:10.097 [2024-12-10 21:56:17.693622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:10.097 [2024-12-10 21:56:17.693635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:10.097 [2024-12-10 21:56:17.693646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:10.097 [2024-12-10 21:56:17.693658] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:10.097 [2024-12-10 21:56:17.693670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.097 [2024-12-10 21:56:17.693681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:10.097 [2024-12-10 21:56:17.693692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:10.097 [2024-12-10 21:56:17.693703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:10.097 [2024-12-10 21:56:17.693715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:10.097 [2024-12-10 21:56:17.693729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.097 [2024-12-10 21:56:17.693740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:10.097 [2024-12-10 21:56:17.693752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:25:10.097 [2024-12-10 21:56:17.693762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.097 [2024-12-10 21:56:17.734135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.097 [2024-12-10 21:56:17.734174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.097 [2024-12-10 21:56:17.734188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.372 ms 00:25:10.097 [2024-12-10 21:56:17.734223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.097 [2024-12-10 21:56:17.734301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.097 [2024-12-10 21:56:17.734313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:10.097 [2024-12-10 21:56:17.734324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.052 ms 00:25:10.097 [2024-12-10 21:56:17.734334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.097 [2024-12-10 21:56:17.810910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.097 [2024-12-10 21:56:17.810952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.097 [2024-12-10 21:56:17.810966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.629 ms 00:25:10.097 [2024-12-10 21:56:17.810992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.097 [2024-12-10 21:56:17.811034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.097 [2024-12-10 21:56:17.811046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.097 [2024-12-10 21:56:17.811078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:10.097 [2024-12-10 21:56:17.811089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.097 [2024-12-10 21:56:17.811626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.097 [2024-12-10 21:56:17.811647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.097 [2024-12-10 21:56:17.811659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:25:10.097 [2024-12-10 21:56:17.811670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.097 [2024-12-10 21:56:17.811801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.097 [2024-12-10 21:56:17.811816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.097 [2024-12-10 21:56:17.811833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:10.097 [2024-12-10 21:56:17.811844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.357 [2024-12-10 21:56:17.831547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.357 [2024-12-10 21:56:17.831585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.357 [2024-12-10 21:56:17.831599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.714 ms 00:25:10.357 [2024-12-10 21:56:17.831609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.357 [2024-12-10 21:56:17.849777] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:10.357 [2024-12-10 21:56:17.849820] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:10.357 [2024-12-10 21:56:17.849837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.357 [2024-12-10 21:56:17.849848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:10.357 [2024-12-10 21:56:17.849859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.152 ms 00:25:10.357 [2024-12-10 21:56:17.849868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.357 [2024-12-10 21:56:17.878010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.357 [2024-12-10 21:56:17.878213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:10.357 [2024-12-10 21:56:17.878236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.140 ms 00:25:10.357 [2024-12-10 21:56:17.878248] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.357 [2024-12-10 21:56:17.895169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.357 [2024-12-10 21:56:17.895204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:10.357 [2024-12-10 21:56:17.895217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.838 ms 00:25:10.357 [2024-12-10 21:56:17.895227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.357 [2024-12-10 21:56:17.912380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.357 [2024-12-10 21:56:17.912417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:10.357 [2024-12-10 21:56:17.912430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.135 ms 00:25:10.357 [2024-12-10 21:56:17.912439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.357 [2024-12-10 21:56:17.913216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.357 [2024-12-10 21:56:17.913240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:10.358 [2024-12-10 21:56:17.913252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:25:10.358 [2024-12-10 21:56:17.913270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:17.998082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.358 [2024-12-10 21:56:17.998322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:10.358 [2024-12-10 21:56:17.998368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.925 ms 00:25:10.358 [2024-12-10 21:56:17.998397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:18.008757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:10.358 [2024-12-10 21:56:18.011136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.358 [2024-12-10 21:56:18.011176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:10.358 [2024-12-10 21:56:18.011190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.599 ms 00:25:10.358 [2024-12-10 21:56:18.011216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:18.011294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.358 [2024-12-10 21:56:18.011309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:10.358 [2024-12-10 21:56:18.011321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:10.358 [2024-12-10 21:56:18.011332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:18.011420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.358 [2024-12-10 21:56:18.011433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:10.358 [2024-12-10 21:56:18.011445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:10.358 [2024-12-10 21:56:18.011455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:18.011488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.358 [2024-12-10 21:56:18.011499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:25:10.358 [2024-12-10 21:56:18.011510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:10.358 [2024-12-10 21:56:18.011519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:18.011559] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:10.358 [2024-12-10 21:56:18.011575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.358 [2024-12-10 21:56:18.011585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:10.358 [2024-12-10 21:56:18.011594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:10.358 [2024-12-10 21:56:18.011604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:18.046726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.358 [2024-12-10 21:56:18.046888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:10.358 [2024-12-10 21:56:18.046964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.159 ms 00:25:10.358 [2024-12-10 21:56:18.047009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:18.047117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.358 [2024-12-10 21:56:18.047160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:10.358 [2024-12-10 21:56:18.047193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:10.358 [2024-12-10 21:56:18.047270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.358 [2024-12-10 21:56:18.048694] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 394.488 ms, result 0 00:25:11.736  [2024-12-10T21:56:20.406Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-10T21:56:21.344Z] Copying: 44/1024 [MB] (21 MBps) [2024-12-10T21:56:22.283Z] Copying: 66/1024 [MB] (21 MBps) [2024-12-10T21:56:23.223Z] Copying: 87/1024 [MB] (21 MBps) [2024-12-10T21:56:24.161Z] Copying: 109/1024 [MB] (21 MBps) [2024-12-10T21:56:25.100Z] Copying: 131/1024 [MB] (21 MBps) [2024-12-10T21:56:26.479Z] Copying: 153/1024 [MB] (21 MBps) [2024-12-10T21:56:27.048Z] Copying: 175/1024 [MB] (21 MBps) [2024-12-10T21:56:28.428Z] Copying: 197/1024 [MB] (22 MBps) [2024-12-10T21:56:29.367Z] Copying: 220/1024 [MB] (22 MBps) [2024-12-10T21:56:30.305Z] Copying: 242/1024 [MB] (22 MBps) [2024-12-10T21:56:31.243Z] Copying: 263/1024 [MB] (20 MBps) [2024-12-10T21:56:32.181Z] Copying: 285/1024 [MB] (21 MBps) [2024-12-10T21:56:33.119Z] Copying: 307/1024 [MB] (21 MBps) [2024-12-10T21:56:34.058Z] Copying: 329/1024 [MB] (21 MBps) [2024-12-10T21:56:35.436Z] Copying: 351/1024 [MB] (22 MBps) [2024-12-10T21:56:36.371Z] Copying: 373/1024 [MB] (21 MBps) [2024-12-10T21:56:37.375Z] Copying: 394/1024 [MB] (21 MBps) [2024-12-10T21:56:38.314Z] Copying: 416/1024 [MB] (21 MBps) [2024-12-10T21:56:39.252Z] Copying: 437/1024 [MB] (21 MBps) [2024-12-10T21:56:40.189Z] Copying: 460/1024 [MB] (22 MBps) [2024-12-10T21:56:41.126Z] Copying: 482/1024 [MB] (22 MBps) [2024-12-10T21:56:42.064Z] Copying: 505/1024 [MB] (22 MBps) [2024-12-10T21:56:43.444Z] Copying: 527/1024 [MB] (22 MBps) [2024-12-10T21:56:44.381Z] Copying: 551/1024 [MB] (23 MBps) [2024-12-10T21:56:45.323Z] Copying: 574/1024 [MB] (23 MBps) [2024-12-10T21:56:46.261Z] Copying: 597/1024 [MB] (23 
MBps) [2024-12-10T21:56:47.199Z] Copying: 620/1024 [MB] (22 MBps) [2024-12-10T21:56:48.137Z] Copying: 642/1024 [MB] (22 MBps) [2024-12-10T21:56:49.075Z] Copying: 666/1024 [MB] (23 MBps) [2024-12-10T21:56:50.013Z] Copying: 690/1024 [MB] (23 MBps) [2024-12-10T21:56:51.392Z] Copying: 713/1024 [MB] (23 MBps) [2024-12-10T21:56:52.331Z] Copying: 737/1024 [MB] (23 MBps) [2024-12-10T21:56:53.269Z] Copying: 760/1024 [MB] (23 MBps) [2024-12-10T21:56:54.207Z] Copying: 783/1024 [MB] (22 MBps) [2024-12-10T21:56:55.145Z] Copying: 805/1024 [MB] (22 MBps) [2024-12-10T21:56:56.083Z] Copying: 829/1024 [MB] (24 MBps) [2024-12-10T21:56:57.021Z] Copying: 854/1024 [MB] (24 MBps) [2024-12-10T21:56:58.400Z] Copying: 878/1024 [MB] (24 MBps) [2024-12-10T21:56:59.337Z] Copying: 901/1024 [MB] (23 MBps) [2024-12-10T21:57:00.337Z] Copying: 923/1024 [MB] (22 MBps) [2024-12-10T21:57:01.276Z] Copying: 947/1024 [MB] (23 MBps) [2024-12-10T21:57:02.213Z] Copying: 972/1024 [MB] (24 MBps) [2024-12-10T21:57:03.151Z] Copying: 996/1024 [MB] (24 MBps) [2024-12-10T21:57:03.412Z] Copying: 1019/1024 [MB] (23 MBps) [2024-12-10T21:57:03.412Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-10 21:57:03.164214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.164273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:55.681 [2024-12-10 21:57:03.164291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:55.681 [2024-12-10 21:57:03.164302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.164325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:55.681 [2024-12-10 21:57:03.168956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.168992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:55.681 [2024-12-10 21:57:03.169005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.620 ms 00:25:55.681 [2024-12-10 21:57:03.169022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.170895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.170938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:55.681 [2024-12-10 21:57:03.170951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.849 ms 00:25:55.681 [2024-12-10 21:57:03.170962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.189561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.189620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:55.681 [2024-12-10 21:57:03.189635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.611 ms 00:25:55.681 [2024-12-10 21:57:03.189645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.194585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.194622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:55.681 [2024-12-10 21:57:03.194635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.906 ms 00:25:55.681 [2024-12-10 21:57:03.194645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 
[2024-12-10 21:57:03.230901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.230937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:55.681 [2024-12-10 21:57:03.230967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.247 ms 00:25:55.681 [2024-12-10 21:57:03.230978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.251880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.251926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:55.681 [2024-12-10 21:57:03.251957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.897 ms 00:25:55.681 [2024-12-10 21:57:03.251967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.252126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.252144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:55.681 [2024-12-10 21:57:03.252156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:25:55.681 [2024-12-10 21:57:03.252166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.287862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.287893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:55.681 [2024-12-10 21:57:03.287921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.737 ms 00:25:55.681 [2024-12-10 21:57:03.287932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.323063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.323098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:55.681 [2024-12-10 21:57:03.323127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.150 ms 00:25:55.681 [2024-12-10 21:57:03.323136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.357259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.357301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:55.681 [2024-12-10 21:57:03.357314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.140 ms 00:25:55.681 [2024-12-10 21:57:03.357323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.391089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.681 [2024-12-10 21:57:03.391122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:55.681 [2024-12-10 21:57:03.391135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.730 ms 00:25:55.681 [2024-12-10 21:57:03.391145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.681 [2024-12-10 21:57:03.391180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:55.681 [2024-12-10 21:57:03.391196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 
state: free 00:25:55.681 [2024-12-10 21:57:03.391227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 
261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:55.681 [2024-12-10 21:57:03.391778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.391991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392001] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:55.682 [2024-12-10 21:57:03.392288] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:55.682 [2024-12-10 21:57:03.392303] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94ce3722-1699-4a2c-9d85-909c122158e6 00:25:55.682 [2024-12-10 21:57:03.392314] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:55.682 [2024-12-10 21:57:03.392324] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:55.682 [2024-12-10 21:57:03.392334] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:55.682 [2024-12-10 21:57:03.392344] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:55.682 [2024-12-10 21:57:03.392353] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:55.682 [2024-12-10 21:57:03.392375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:55.682 [2024-12-10 21:57:03.392384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:55.682 [2024-12-10 21:57:03.392393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:55.682 [2024-12-10 21:57:03.392402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:55.682 [2024-12-10 21:57:03.392412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.682 [2024-12-10 21:57:03.392422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:55.682 [2024-12-10 21:57:03.392432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:25:55.682 [2024-12-10 21:57:03.392442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.942 [2024-12-10 21:57:03.411903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.942 [2024-12-10 21:57:03.411934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:55.942 [2024-12-10 21:57:03.411947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.455 ms 00:25:55.942 [2024-12-10 21:57:03.411956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.942 [2024-12-10 21:57:03.412562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.942 [2024-12-10 21:57:03.412580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:55.942 [2024-12-10 21:57:03.412592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:25:55.942 [2024-12-10 21:57:03.412609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.942 [2024-12-10 21:57:03.462570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.942 [2024-12-10 21:57:03.462604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.942 [2024-12-10 21:57:03.462617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.942 [2024-12-10 21:57:03.462628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.942 [2024-12-10 21:57:03.462688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.942 [2024-12-10 21:57:03.462699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.942 [2024-12-10 21:57:03.462711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.942 [2024-12-10 21:57:03.462727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.942 [2024-12-10 21:57:03.462811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.942 [2024-12-10 21:57:03.462825] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.942 [2024-12-10 21:57:03.462836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.942 [2024-12-10 21:57:03.462846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.942 [2024-12-10 21:57:03.462863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.942 [2024-12-10 21:57:03.462874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.942 [2024-12-10 21:57:03.462885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.942 [2024-12-10 21:57:03.462895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.942 [2024-12-10 21:57:03.586570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.942 [2024-12-10 21:57:03.586618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:55.942 [2024-12-10 21:57:03.586634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.942 [2024-12-10 21:57:03.586644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.201 [2024-12-10 21:57:03.684450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.201 [2024-12-10 21:57:03.684496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.201 [2024-12-10 21:57:03.684512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.201 [2024-12-10 21:57:03.684529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.201 [2024-12-10 21:57:03.684633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.201 [2024-12-10 21:57:03.684645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.201 [2024-12-10 21:57:03.684656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.201 [2024-12-10 21:57:03.684666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.201 [2024-12-10 21:57:03.684705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.201 [2024-12-10 21:57:03.684716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.201 [2024-12-10 21:57:03.684726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.201 [2024-12-10 21:57:03.684736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.201 [2024-12-10 21:57:03.684856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.201 [2024-12-10 21:57:03.684869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.201 [2024-12-10 21:57:03.684879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.201 [2024-12-10 21:57:03.684889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.201 [2024-12-10 21:57:03.684926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.201 [2024-12-10 21:57:03.684939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:56.201 [2024-12-10 21:57:03.684949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.201 [2024-12-10 21:57:03.684959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.201 [2024-12-10 21:57:03.684998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
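(The statistics dump a little earlier in this shutdown (the ftl_debug.c 211-220 records) reports total writes: 960, user writes: 0 and WAF: inf. Write amplification factor is simply media writes divided by host writes, so a restore pass that issued no user writes prints inf. A minimal sketch of that arithmetic, using only the two counters shown in the dump; the helper itself is hypothetical, not part of SPDK or the test harness:

import math

def waf(total_writes: int, user_writes: int) -> float:
    # Write amplification factor: all media writes / writes issued by the host.
    if user_writes == 0:
        return math.inf  # matches the "WAF: inf" record in the dump above
    return total_writes / user_writes

print(waf(960, 0))  # inf -- all 960 media writes here are metadata/housekeeping
)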
00:25:56.201 [2024-12-10 21:57:03.685013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.202 [2024-12-10 21:57:03.685023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.202 [2024-12-10 21:57:03.685033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.202 [2024-12-10 21:57:03.685117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.202 [2024-12-10 21:57:03.685131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.202 [2024-12-10 21:57:03.685142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.202 [2024-12-10 21:57:03.685152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.202 [2024-12-10 21:57:03.685286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.885 ms, result 0 00:25:57.139 00:25:57.139 00:25:57.139 21:57:04 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:57.139 [2024-12-10 21:57:04.856517] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:25:57.139 [2024-12-10 21:57:04.856670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81195 ] 00:25:57.398 [2024-12-10 21:57:05.038463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.656 [2024-12-10 21:57:05.150227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.915 [2024-12-10 21:57:05.514694] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.915 [2024-12-10 21:57:05.514767] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:58.175 [2024-12-10 21:57:05.677723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.677780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:58.175 [2024-12-10 21:57:05.677795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:58.175 [2024-12-10 21:57:05.677822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.677869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.677886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:58.175 [2024-12-10 21:57:05.677897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:58.175 [2024-12-10 21:57:05.677916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.677938] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:58.175 [2024-12-10 21:57:05.678954] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:58.175 [2024-12-10 21:57:05.678984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.678996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
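(The first spdk_dd pass above emitted a stream of progress records of the form "[ISO timestamp] Copying: N/1024 [MB] (R MBps)", closing with an overall "average 22 MBps"; the second pass just launched by restore.sh@74 will produce the same shape. A sketch for pulling per-interval throughput out of such a console log — a hypothetical post-processing helper, assuming the record format exactly as printed, and a placeholder log path:

import re

# Matches records such as: [2024-12-10T21:56:20.406Z] Copying: 22/1024 [MB] (22 MBps)
PROGRESS = re.compile(
    r"\[(?P<ts>[0-9T:.Z-]+)\] Copying: (?P<done>\d+)/(?P<total>\d+) "
    r"\[MB\] \((?P<rate>\d+) MBps\)"
)

def throughput_samples(log_text: str) -> list[tuple[str, int]]:
    # (timestamp, MBps) per progress record; the final "average" line is
    # skipped because its rate field is prefixed with the word "average".
    return [(m["ts"], int(m["rate"])) for m in PROGRESS.finditer(log_text)]

samples = throughput_samples(open("console.log").read())  # hypothetical path
if samples:
    rates = [r for _, r in samples]
    print(f"{len(rates)} samples, {min(rates)}-{max(rates)} MBps, "
          f"mean {sum(rates) / len(rates):.1f} MBps")

Run against the first pass above, the samples cluster between 20 and 24 MBps, consistent with the reported overall average of 22 MBps.
)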
00:25:58.175 [2024-12-10 21:57:05.679007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:25:58.175 [2024-12-10 21:57:05.679017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.680867] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:58.175 [2024-12-10 21:57:05.699095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.699138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:58.175 [2024-12-10 21:57:05.699154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.258 ms 00:25:58.175 [2024-12-10 21:57:05.699180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.699253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.699267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:58.175 [2024-12-10 21:57:05.699279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:58.175 [2024-12-10 21:57:05.699290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.707780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.707813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:58.175 [2024-12-10 21:57:05.707826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.428 ms 00:25:58.175 [2024-12-10 21:57:05.707841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.707928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.707942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:58.175 [2024-12-10 21:57:05.707954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:58.175 [2024-12-10 21:57:05.707965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.708005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.708017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:58.175 [2024-12-10 21:57:05.708027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:58.175 [2024-12-10 21:57:05.708038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.708084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:58.175 [2024-12-10 21:57:05.712989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.713023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:58.175 [2024-12-10 21:57:05.713039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:25:58.175 [2024-12-10 21:57:05.713058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.713094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.175 [2024-12-10 21:57:05.713107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:58.175 [2024-12-10 21:57:05.713117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 
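(Every management step in these logs is a quadruplet from mngt/ftl_mngt.c: a 427 Action (or Rollback) marker, a 428 "name:" record, a 430 "duration:" record and a 431 "status:" record, and each process closes with a 459 finish_msg total ('FTL startup', duration = 394.488 ms and 'FTL shutdown', 521.885 ms earlier). A sketch of pairing step names with their durations across such a log — a hypothetical helper whose regexes assume the exact record text shown here:

import re

# Pairs each "428:trace_step: ... name: <step>" with the "430:trace_step: ...
# duration: <ms> ms" record that follows it; \s+ tolerates the arbitrary line
# wrapping in the captured console output.
STEP = re.compile(
    r"428:trace_step:\s+\*NOTICE\*:\s+\[FTL\]\[\w+\]\s+name:\s+(?P<name>.+?)"
    r"\s+00:\d{2}:\d{2}\.\d{3}.*?"
    r"430:trace_step:.*?duration:\s+(?P<ms>[\d.]+)\s+ms",
    re.DOTALL,
)

def step_durations(log_text: str) -> list[tuple[str, float]]:
    # (step name, duration in ms) in the order the steps ran.
    return [(m["name"], float(m["ms"])) for m in STEP.finditer(log_text)]

steps = step_durations(open("console.log").read())  # hypothetical log path
for name, ms in steps:
    print(f"{ms:10.3f} ms  {name}")
print(f"{sum(ms for _, ms in steps):10.3f} ms  total")

Note the per-step sum comes in a little under the finish_msg total (the startup steps above sum to roughly 378 ms against the reported 394.488 ms), since wall-clock time spent between steps is not attributed to any step.
)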
00:25:58.175 [2024-12-10 21:57:05.713127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.175 [2024-12-10 21:57:05.713179] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:58.175 [2024-12-10 21:57:05.713217] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:58.175 [2024-12-10 21:57:05.713254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:58.175 [2024-12-10 21:57:05.713275] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:58.175 [2024-12-10 21:57:05.713366] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:58.175 [2024-12-10 21:57:05.713380] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:58.175 [2024-12-10 21:57:05.713394] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:58.175 [2024-12-10 21:57:05.713408] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:58.175 [2024-12-10 21:57:05.713420] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:58.175 [2024-12-10 21:57:05.713433] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:58.176 [2024-12-10 21:57:05.713444] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:58.176 [2024-12-10 21:57:05.713453] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:58.176 [2024-12-10 21:57:05.713468] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:58.176 [2024-12-10 21:57:05.713478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.176 [2024-12-10 21:57:05.713489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:58.176 [2024-12-10 21:57:05.713499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:25:58.176 [2024-12-10 21:57:05.713510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.176 [2024-12-10 21:57:05.713581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.176 [2024-12-10 21:57:05.713594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:58.176 [2024-12-10 21:57:05.713603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:58.176 [2024-12-10 21:57:05.713614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.176 [2024-12-10 21:57:05.713704] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:58.176 [2024-12-10 21:57:05.713724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:58.176 [2024-12-10 21:57:05.713735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:58.176 [2024-12-10 21:57:05.713746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.176 [2024-12-10 21:57:05.713756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:58.176 [2024-12-10 21:57:05.713767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:58.176 [2024-12-10 21:57:05.713777] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:58.176 [2024-12-10 21:57:05.713786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:58.176 [2024-12-10 21:57:05.713795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:58.176 [2024-12-10 21:57:05.713804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:58.176 [2024-12-10 21:57:05.713813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:58.176 [2024-12-10 21:57:05.713824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:58.176 [2024-12-10 21:57:05.713834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:58.176 [2024-12-10 21:57:05.713856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:58.176 [2024-12-10 21:57:05.713866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:58.176 [2024-12-10 21:57:05.713875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.176 [2024-12-10 21:57:05.713885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:58.176 [2024-12-10 21:57:05.713895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:58.176 [2024-12-10 21:57:05.713904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.176 [2024-12-10 21:57:05.713913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:58.176 [2024-12-10 21:57:05.713923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:58.176 [2024-12-10 21:57:05.713932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:58.176 [2024-12-10 21:57:05.713941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:58.176 [2024-12-10 21:57:05.713951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:58.176 [2024-12-10 21:57:05.713960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:58.176 [2024-12-10 21:57:05.713970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:58.176 [2024-12-10 21:57:05.713980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:58.176 [2024-12-10 21:57:05.713989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:58.176 [2024-12-10 21:57:05.713998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:58.176 [2024-12-10 21:57:05.714008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:58.176 [2024-12-10 21:57:05.714017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:58.176 [2024-12-10 21:57:05.714026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:58.176 [2024-12-10 21:57:05.714036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:58.176 [2024-12-10 21:57:05.714045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:58.176 [2024-12-10 21:57:05.714065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:58.176 [2024-12-10 21:57:05.714075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:58.176 [2024-12-10 21:57:05.714084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:58.176 [2024-12-10 21:57:05.714093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:58.176 [2024-12-10 
21:57:05.714103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:58.176 [2024-12-10 21:57:05.714128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.176 [2024-12-10 21:57:05.714138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:58.176 [2024-12-10 21:57:05.714149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:58.176 [2024-12-10 21:57:05.714159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.176 [2024-12-10 21:57:05.714169] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:58.176 [2024-12-10 21:57:05.714180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:58.176 [2024-12-10 21:57:05.714191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:58.176 [2024-12-10 21:57:05.714201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:58.176 [2024-12-10 21:57:05.714211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:58.176 [2024-12-10 21:57:05.714221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:58.176 [2024-12-10 21:57:05.714231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:58.176 [2024-12-10 21:57:05.714241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:58.176 [2024-12-10 21:57:05.714250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:58.176 [2024-12-10 21:57:05.714259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:58.176 [2024-12-10 21:57:05.714271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:58.176 [2024-12-10 21:57:05.714283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:58.176 [2024-12-10 21:57:05.714301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:58.176 [2024-12-10 21:57:05.714311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:58.176 [2024-12-10 21:57:05.714323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:58.176 [2024-12-10 21:57:05.714334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:58.176 [2024-12-10 21:57:05.714344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:58.176 [2024-12-10 21:57:05.714355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:58.176 [2024-12-10 21:57:05.714367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:58.176 [2024-12-10 21:57:05.714378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:58.176 [2024-12-10 21:57:05.714390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 
blk_sz:0x40 00:25:58.176 [2024-12-10 21:57:05.714411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:58.176 [2024-12-10 21:57:05.714423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:58.176 [2024-12-10 21:57:05.714434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:58.176 [2024-12-10 21:57:05.714445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:58.176 [2024-12-10 21:57:05.714456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:58.176 [2024-12-10 21:57:05.714467] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:58.176 [2024-12-10 21:57:05.714478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:58.176 [2024-12-10 21:57:05.714490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:58.176 [2024-12-10 21:57:05.714501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:58.176 [2024-12-10 21:57:05.714512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:58.176 [2024-12-10 21:57:05.714522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:58.176 [2024-12-10 21:57:05.714534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.176 [2024-12-10 21:57:05.714545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:58.176 [2024-12-10 21:57:05.714555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:25:58.176 [2024-12-10 21:57:05.714565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.176 [2024-12-10 21:57:05.753185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.176 [2024-12-10 21:57:05.753228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:58.176 [2024-12-10 21:57:05.753242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.632 ms 00:25:58.176 [2024-12-10 21:57:05.753274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.176 [2024-12-10 21:57:05.753358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.176 [2024-12-10 21:57:05.753370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:58.176 [2024-12-10 21:57:05.753381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:58.176 [2024-12-10 21:57:05.753392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.176 [2024-12-10 21:57:05.827134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.176 [2024-12-10 21:57:05.827177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:58.176 [2024-12-10 21:57:05.827192] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.796 ms 00:25:58.176 [2024-12-10 21:57:05.827219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.176 [2024-12-10 21:57:05.827262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.176 [2024-12-10 21:57:05.827274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:58.177 [2024-12-10 21:57:05.827290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:58.177 [2024-12-10 21:57:05.827302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.177 [2024-12-10 21:57:05.828114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.177 [2024-12-10 21:57:05.828136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:58.177 [2024-12-10 21:57:05.828148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:25:58.177 [2024-12-10 21:57:05.828159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.177 [2024-12-10 21:57:05.828282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.177 [2024-12-10 21:57:05.828296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:58.177 [2024-12-10 21:57:05.828311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:58.177 [2024-12-10 21:57:05.828321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.177 [2024-12-10 21:57:05.846755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.177 [2024-12-10 21:57:05.846811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:58.177 [2024-12-10 21:57:05.846826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.443 ms 00:25:58.177 [2024-12-10 21:57:05.846853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.177 [2024-12-10 21:57:05.865319] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:58.177 [2024-12-10 21:57:05.865358] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:58.177 [2024-12-10 21:57:05.865373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.177 [2024-12-10 21:57:05.865383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:58.177 [2024-12-10 21:57:05.865395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.436 ms 00:25:58.177 [2024-12-10 21:57:05.865406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.177 [2024-12-10 21:57:05.893618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.177 [2024-12-10 21:57:05.893657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:58.177 [2024-12-10 21:57:05.893671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.212 ms 00:25:58.177 [2024-12-10 21:57:05.893682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.435 [2024-12-10 21:57:05.910692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:05.910732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:58.436 [2024-12-10 21:57:05.910745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.975 ms 00:25:58.436 [2024-12-10 21:57:05.910771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:05.927877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:05.927915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:58.436 [2024-12-10 21:57:05.927927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.091 ms 00:25:58.436 [2024-12-10 21:57:05.927952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:05.928700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:05.928733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:58.436 [2024-12-10 21:57:05.928749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:25:58.436 [2024-12-10 21:57:05.928760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.012308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:06.012375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:58.436 [2024-12-10 21:57:06.012399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.656 ms 00:25:58.436 [2024-12-10 21:57:06.012410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.022642] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:58.436 [2024-12-10 21:57:06.024821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:06.024850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:58.436 [2024-12-10 21:57:06.024863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.383 ms 00:25:58.436 [2024-12-10 21:57:06.024874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.024949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:06.024962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:58.436 [2024-12-10 21:57:06.024974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:58.436 [2024-12-10 21:57:06.024989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.025076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:06.025090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:58.436 [2024-12-10 21:57:06.025100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:58.436 [2024-12-10 21:57:06.025110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.025131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:06.025142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:58.436 [2024-12-10 21:57:06.025152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:58.436 [2024-12-10 21:57:06.025162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.025205] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 
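
The layout dumps above describe the same regions in two unit systems: dump_region prints offsets and sizes in MiB, while the v5 superblock dump prints the same regions in FTL blocks as hex blk_offs/blk_sz pairs. A quick cross-check sketch, assuming the 4 KiB block size that this log's numbers imply (the region-to-type mapping below is inferred from the ordering of the two dumps):

    FTL_BLOCK_SIZE = 4096  # bytes; assumption consistent with this log's numbers

    def blocks_to_mib(hex_blocks):
        """Convert a hex FTL block count into MiB."""
        return int(hex_blocks, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

    # Region type:0x2 (l2p): blk_offs:0x20 blk_sz:0x5000
    assert blocks_to_mib("0x20") == 0.125     # dump_region: offset: 0.12 MiB
    assert blocks_to_mib("0x5000") == 80.0    # dump_region: blocks: 80.00 MiB
    # Region type:0x3 (band_md): blk_offs:0x5020 blk_sz:0x80
    assert blocks_to_mib("0x5020") == 80.125  # dump_region: offset: 80.12 MiB
    assert blocks_to_mib("0x80") == 0.5       # dump_region: blocks: 0.50 MiB
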
00:25:58.436 [2024-12-10 21:57:06.025218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:06.025228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:58.436 [2024-12-10 21:57:06.025239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:58.436 [2024-12-10 21:57:06.025248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.059788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:06.059841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:58.436 [2024-12-10 21:57:06.059863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.559 ms 00:25:58.436 [2024-12-10 21:57:06.059874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.059948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.436 [2024-12-10 21:57:06.059960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:58.436 [2024-12-10 21:57:06.059971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:58.436 [2024-12-10 21:57:06.059983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.436 [2024-12-10 21:57:06.061488] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.881 ms, result 0 00:25:59.814  [2024-12-10T21:57:08.482Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-10T21:57:09.420Z] Copying: 48/1024 [MB] (24 MBps) [2024-12-10T21:57:10.357Z] Copying: 72/1024 [MB] (24 MBps) [2024-12-10T21:57:11.295Z] Copying: 97/1024 [MB] (24 MBps) [2024-12-10T21:57:12.674Z] Copying: 122/1024 [MB] (25 MBps) [2024-12-10T21:57:13.612Z] Copying: 147/1024 [MB] (24 MBps) [2024-12-10T21:57:14.549Z] Copying: 170/1024 [MB] (23 MBps) [2024-12-10T21:57:15.486Z] Copying: 194/1024 [MB] (23 MBps) [2024-12-10T21:57:16.423Z] Copying: 217/1024 [MB] (23 MBps) [2024-12-10T21:57:17.361Z] Copying: 241/1024 [MB] (23 MBps) [2024-12-10T21:57:18.300Z] Copying: 265/1024 [MB] (24 MBps) [2024-12-10T21:57:19.680Z] Copying: 289/1024 [MB] (24 MBps) [2024-12-10T21:57:20.618Z] Copying: 314/1024 [MB] (24 MBps) [2024-12-10T21:57:21.572Z] Copying: 339/1024 [MB] (24 MBps) [2024-12-10T21:57:22.580Z] Copying: 363/1024 [MB] (24 MBps) [2024-12-10T21:57:23.518Z] Copying: 388/1024 [MB] (24 MBps) [2024-12-10T21:57:24.455Z] Copying: 413/1024 [MB] (25 MBps) [2024-12-10T21:57:25.392Z] Copying: 438/1024 [MB] (24 MBps) [2024-12-10T21:57:26.329Z] Copying: 463/1024 [MB] (24 MBps) [2024-12-10T21:57:27.267Z] Copying: 487/1024 [MB] (24 MBps) [2024-12-10T21:57:28.645Z] Copying: 511/1024 [MB] (24 MBps) [2024-12-10T21:57:29.583Z] Copying: 536/1024 [MB] (24 MBps) [2024-12-10T21:57:30.524Z] Copying: 561/1024 [MB] (24 MBps) [2024-12-10T21:57:31.462Z] Copying: 585/1024 [MB] (24 MBps) [2024-12-10T21:57:32.400Z] Copying: 611/1024 [MB] (25 MBps) [2024-12-10T21:57:33.338Z] Copying: 636/1024 [MB] (24 MBps) [2024-12-10T21:57:34.277Z] Copying: 661/1024 [MB] (25 MBps) [2024-12-10T21:57:35.657Z] Copying: 686/1024 [MB] (25 MBps) [2024-12-10T21:57:36.597Z] Copying: 710/1024 [MB] (24 MBps) [2024-12-10T21:57:37.536Z] Copying: 734/1024 [MB] (23 MBps) [2024-12-10T21:57:38.474Z] Copying: 758/1024 [MB] (23 MBps) [2024-12-10T21:57:39.411Z] Copying: 782/1024 [MB] (24 MBps) [2024-12-10T21:57:40.350Z] Copying: 805/1024 [MB] (23 MBps) [2024-12-10T21:57:41.289Z] 
Copying: 828/1024 [MB] (22 MBps) [2024-12-10T21:57:42.228Z] Copying: 851/1024 [MB] (23 MBps) [2024-12-10T21:57:43.610Z] Copying: 875/1024 [MB] (23 MBps) [2024-12-10T21:57:44.549Z] Copying: 897/1024 [MB] (22 MBps) [2024-12-10T21:57:45.515Z] Copying: 920/1024 [MB] (22 MBps) [2024-12-10T21:57:46.454Z] Copying: 943/1024 [MB] (23 MBps) [2024-12-10T21:57:47.393Z] Copying: 966/1024 [MB] (22 MBps) [2024-12-10T21:57:48.332Z] Copying: 989/1024 [MB] (22 MBps) [2024-12-10T21:57:48.901Z] Copying: 1012/1024 [MB] (22 MBps) [2024-12-10T21:57:48.901Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-10 21:57:48.769533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.170 [2024-12-10 21:57:48.769701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:41.171 [2024-12-10 21:57:48.769758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:41.171 [2024-12-10 21:57:48.769798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-10 21:57:48.769880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:41.171 [2024-12-10 21:57:48.781472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-10 21:57:48.781561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:41.171 [2024-12-10 21:57:48.781593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.545 ms 00:26:41.171 [2024-12-10 21:57:48.781619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-10 21:57:48.782085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-10 21:57:48.782131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:41.171 [2024-12-10 21:57:48.782159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:26:41.171 [2024-12-10 21:57:48.782183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-10 21:57:48.787135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-10 21:57:48.787174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:41.171 [2024-12-10 21:57:48.787193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:26:41.171 [2024-12-10 21:57:48.787219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-10 21:57:48.794454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-10 21:57:48.794508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:41.171 [2024-12-10 21:57:48.794528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.215 ms 00:26:41.171 [2024-12-10 21:57:48.794545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-10 21:57:48.830994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-10 21:57:48.831043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:41.171 [2024-12-10 21:57:48.831067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.428 ms 00:26:41.171 [2024-12-10 21:57:48.831095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-10 21:57:48.852516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-10 
21:57:48.852563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:41.171 [2024-12-10 21:57:48.852595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.408 ms 00:26:41.171 [2024-12-10 21:57:48.852608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-10 21:57:48.852754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-10 21:57:48.852770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:41.171 [2024-12-10 21:57:48.852783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:41.171 [2024-12-10 21:57:48.852795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.171 [2024-12-10 21:57:48.887229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.171 [2024-12-10 21:57:48.887274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:41.171 [2024-12-10 21:57:48.887306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.471 ms 00:26:41.171 [2024-12-10 21:57:48.887317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-10 21:57:48.922075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.432 [2024-12-10 21:57:48.922115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:41.432 [2024-12-10 21:57:48.922129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.771 ms 00:26:41.432 [2024-12-10 21:57:48.922140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-10 21:57:48.957755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.432 [2024-12-10 21:57:48.957799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:41.432 [2024-12-10 21:57:48.957815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.616 ms 00:26:41.432 [2024-12-10 21:57:48.957827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-10 21:57:48.991916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.432 [2024-12-10 21:57:48.991959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:41.432 [2024-12-10 21:57:48.991990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.057 ms 00:26:41.432 [2024-12-10 21:57:48.992001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.432 [2024-12-10 21:57:48.992043] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:41.432 [2024-12-10 21:57:48.992086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 
state: free 00:26:41.432 [2024-12-10 21:57:48.992167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 
261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:41.432 [2024-12-10 21:57:48.992853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.992993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993114] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:41.433 [2024-12-10 21:57:48.993368] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:41.433 [2024-12-10 21:57:48.993381] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94ce3722-1699-4a2c-9d85-909c122158e6 00:26:41.433 [2024-12-10 21:57:48.993393] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:41.433 [2024-12-10 21:57:48.993406] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:41.433 [2024-12-10 21:57:48.993418] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:41.433 [2024-12-10 21:57:48.993430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:41.433 [2024-12-10 
21:57:48.993455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:41.433 [2024-12-10 21:57:48.993467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:41.433 [2024-12-10 21:57:48.993479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:41.433 [2024-12-10 21:57:48.993490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:41.433 [2024-12-10 21:57:48.993500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:41.433 [2024-12-10 21:57:48.993512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.433 [2024-12-10 21:57:48.993524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:41.433 [2024-12-10 21:57:48.993537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:26:41.433 [2024-12-10 21:57:48.993553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.433 [2024-12-10 21:57:49.012832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.433 [2024-12-10 21:57:49.012870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:41.433 [2024-12-10 21:57:49.012885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.271 ms 00:26:41.433 [2024-12-10 21:57:49.012897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.433 [2024-12-10 21:57:49.013498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.433 [2024-12-10 21:57:49.013523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:41.433 [2024-12-10 21:57:49.013543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:26:41.433 [2024-12-10 21:57:49.013556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.433 [2024-12-10 21:57:49.062762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.433 [2024-12-10 21:57:49.062805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:41.433 [2024-12-10 21:57:49.062837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.433 [2024-12-10 21:57:49.062850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.433 [2024-12-10 21:57:49.062914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.433 [2024-12-10 21:57:49.062928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:41.433 [2024-12-10 21:57:49.062949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.433 [2024-12-10 21:57:49.062960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.433 [2024-12-10 21:57:49.063057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.433 [2024-12-10 21:57:49.063091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:41.433 [2024-12-10 21:57:49.063105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.433 [2024-12-10 21:57:49.063116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.433 [2024-12-10 21:57:49.063153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.433 [2024-12-10 21:57:49.063167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:41.433 [2024-12-10 21:57:49.063179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:41.433 [2024-12-10 21:57:49.063197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.183799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.693 [2024-12-10 21:57:49.183862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:41.693 [2024-12-10 21:57:49.183881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.693 [2024-12-10 21:57:49.183910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.279571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.693 [2024-12-10 21:57:49.279631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:41.693 [2024-12-10 21:57:49.279656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.693 [2024-12-10 21:57:49.279686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.279792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.693 [2024-12-10 21:57:49.279806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:41.693 [2024-12-10 21:57:49.279819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.693 [2024-12-10 21:57:49.279831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.279877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.693 [2024-12-10 21:57:49.279890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.693 [2024-12-10 21:57:49.279902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.693 [2024-12-10 21:57:49.279914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.280054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.693 [2024-12-10 21:57:49.280069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.693 [2024-12-10 21:57:49.280102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.693 [2024-12-10 21:57:49.280115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.280166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.693 [2024-12-10 21:57:49.280181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:41.693 [2024-12-10 21:57:49.280193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.693 [2024-12-10 21:57:49.280205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.280255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.693 [2024-12-10 21:57:49.280270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.693 [2024-12-10 21:57:49.280283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.693 [2024-12-10 21:57:49.280294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.280343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.693 [2024-12-10 21:57:49.280358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.693 
[2024-12-10 21:57:49.280370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.693 [2024-12-10 21:57:49.280382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.693 [2024-12-10 21:57:49.280533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 511.857 ms, result 0 00:26:42.629 00:26:42.629 00:26:42.629 21:57:50 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:44.537 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:44.537 21:57:52 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:44.537 [2024-12-10 21:57:52.094020] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:26:44.537 [2024-12-10 21:57:52.094323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81670 ] 00:26:44.796 [2024-12-10 21:57:52.274895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.796 [2024-12-10 21:57:52.387304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.056 [2024-12-10 21:57:52.749933] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:45.056 [2024-12-10 21:57:52.750021] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:45.317 [2024-12-10 21:57:52.914632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.914700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:45.317 [2024-12-10 21:57:52.914718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:45.317 [2024-12-10 21:57:52.914747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.914802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.914820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:45.317 [2024-12-10 21:57:52.914833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:45.317 [2024-12-10 21:57:52.914845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.914871] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:45.317 [2024-12-10 21:57:52.915871] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:45.317 [2024-12-10 21:57:52.915904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.915917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:45.317 [2024-12-10 21:57:52.915931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:26:45.317 [2024-12-10 21:57:52.915944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.917463] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:45.317 [2024-12-10 
21:57:52.935765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.935812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:45.317 [2024-12-10 21:57:52.935828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.332 ms 00:26:45.317 [2024-12-10 21:57:52.935856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.935936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.935951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:45.317 [2024-12-10 21:57:52.935964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:45.317 [2024-12-10 21:57:52.935975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.944204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.944237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:45.317 [2024-12-10 21:57:52.944251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.147 ms 00:26:45.317 [2024-12-10 21:57:52.944268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.944377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.944392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:45.317 [2024-12-10 21:57:52.944405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:45.317 [2024-12-10 21:57:52.944416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.944463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.944476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:45.317 [2024-12-10 21:57:52.944488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:45.317 [2024-12-10 21:57:52.944499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.944533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:45.317 [2024-12-10 21:57:52.949691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.949727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:45.317 [2024-12-10 21:57:52.949747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.171 ms 00:26:45.317 [2024-12-10 21:57:52.949759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.949817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.949831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:45.317 [2024-12-10 21:57:52.949844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:45.317 [2024-12-10 21:57:52.949855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.949914] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:45.317 [2024-12-10 21:57:52.949946] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 
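
In the "Dump statistics" step before the shutdown above, WAF is printed as "inf": that pass recorded 960 total media writes against 0 user writes, and the write amplification factor (total media writes per user write) is undefined for a zero denominator. A guard that reproduces the dumped value (a sketch consistent with the numbers above, not SPDK's implementation):

    import math

    def waf(total_writes, user_writes):
        """Write amplification factor: total media writes per user write."""
        return total_writes / user_writes if user_writes else math.inf

    assert waf(960, 0) == math.inf  # matches "WAF: inf" in the dump above
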
00:26:45.317 [2024-12-10 21:57:52.949983] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:45.317 [2024-12-10 21:57:52.950008] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:45.317 [2024-12-10 21:57:52.950127] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:45.317 [2024-12-10 21:57:52.950145] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:45.317 [2024-12-10 21:57:52.950162] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:45.317 [2024-12-10 21:57:52.950177] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:45.317 [2024-12-10 21:57:52.950192] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:45.317 [2024-12-10 21:57:52.950205] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:45.317 [2024-12-10 21:57:52.950218] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:45.317 [2024-12-10 21:57:52.950230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:45.317 [2024-12-10 21:57:52.950247] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:45.317 [2024-12-10 21:57:52.950259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.950272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:45.317 [2024-12-10 21:57:52.950285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:26:45.317 [2024-12-10 21:57:52.950297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.950377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.317 [2024-12-10 21:57:52.950391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:45.317 [2024-12-10 21:57:52.950403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:45.317 [2024-12-10 21:57:52.950423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.317 [2024-12-10 21:57:52.950522] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:45.317 [2024-12-10 21:57:52.950538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:45.317 [2024-12-10 21:57:52.950551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:45.317 [2024-12-10 21:57:52.950564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.317 [2024-12-10 21:57:52.950576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:45.317 [2024-12-10 21:57:52.950588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:45.317 [2024-12-10 21:57:52.950599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:45.317 [2024-12-10 21:57:52.950611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:45.317 [2024-12-10 21:57:52.950622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:45.317 [2024-12-10 21:57:52.950634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
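
The l2p region size follows directly from the two parameters printed in this layout setup: 20971520 L2P entries at 4 bytes per address is exactly the 80.00 MiB that dump_region reports for the l2p region. A one-line sanity check:

    # L2P table sizing from the dump above: entries * address size.
    L2P_ENTRIES = 20971520   # "L2P entries"
    L2P_ADDR_SIZE = 4        # "L2P address size" (bytes per entry)

    assert L2P_ENTRIES * L2P_ADDR_SIZE == 80 * 1024 * 1024  # 80.00 MiB
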
00:26:45.317 [2024-12-10 21:57:52.950646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:45.317 [2024-12-10 21:57:52.950659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:45.318 [2024-12-10 21:57:52.950671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:45.318 [2024-12-10 21:57:52.950696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:45.318 [2024-12-10 21:57:52.950708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:45.318 [2024-12-10 21:57:52.950720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.318 [2024-12-10 21:57:52.950731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:45.318 [2024-12-10 21:57:52.950743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:45.318 [2024-12-10 21:57:52.950755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.318 [2024-12-10 21:57:52.950766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:45.318 [2024-12-10 21:57:52.950778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:45.318 [2024-12-10 21:57:52.950790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.318 [2024-12-10 21:57:52.950802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:45.318 [2024-12-10 21:57:52.950812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:45.318 [2024-12-10 21:57:52.950824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.318 [2024-12-10 21:57:52.950835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:45.318 [2024-12-10 21:57:52.950846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:45.318 [2024-12-10 21:57:52.950857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.318 [2024-12-10 21:57:52.950867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:45.318 [2024-12-10 21:57:52.950879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:45.318 [2024-12-10 21:57:52.950890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.318 [2024-12-10 21:57:52.950901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:45.318 [2024-12-10 21:57:52.950913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:45.318 [2024-12-10 21:57:52.950923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:45.318 [2024-12-10 21:57:52.950935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:45.318 [2024-12-10 21:57:52.950946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:45.318 [2024-12-10 21:57:52.950957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:45.318 [2024-12-10 21:57:52.950968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:45.318 [2024-12-10 21:57:52.950979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:45.318 [2024-12-10 21:57:52.950989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.318 [2024-12-10 21:57:52.951000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:45.318 [2024-12-10 21:57:52.951011] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:45.318 [2024-12-10 21:57:52.951022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.318 [2024-12-10 21:57:52.951034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:45.318 [2024-12-10 21:57:52.951046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:45.318 [2024-12-10 21:57:52.951070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:45.318 [2024-12-10 21:57:52.951082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.318 [2024-12-10 21:57:52.951095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:45.318 [2024-12-10 21:57:52.951107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:45.318 [2024-12-10 21:57:52.951118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:45.318 [2024-12-10 21:57:52.951130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:45.318 [2024-12-10 21:57:52.951141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:45.318 [2024-12-10 21:57:52.951153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:45.318 [2024-12-10 21:57:52.951166] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:45.318 [2024-12-10 21:57:52.951180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:45.318 [2024-12-10 21:57:52.951199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:45.318 [2024-12-10 21:57:52.951212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:45.318 [2024-12-10 21:57:52.951225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:45.318 [2024-12-10 21:57:52.951238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:45.318 [2024-12-10 21:57:52.951250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:45.318 [2024-12-10 21:57:52.951262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:45.318 [2024-12-10 21:57:52.951275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:45.318 [2024-12-10 21:57:52.951287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:45.318 [2024-12-10 21:57:52.951299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:45.318 [2024-12-10 21:57:52.951311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:45.318 [2024-12-10 21:57:52.951323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:45.318 [2024-12-10 21:57:52.951335] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:45.318 [2024-12-10 21:57:52.951348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:45.318 [2024-12-10 21:57:52.951361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:45.318 [2024-12-10 21:57:52.951373] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:45.318 [2024-12-10 21:57:52.951385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:45.318 [2024-12-10 21:57:52.951398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:45.318 [2024-12-10 21:57:52.951411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:45.318 [2024-12-10 21:57:52.951423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:45.318 [2024-12-10 21:57:52.951435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:45.318 [2024-12-10 21:57:52.951448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.318 [2024-12-10 21:57:52.951461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:45.318 [2024-12-10 21:57:52.951473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:26:45.318 [2024-12-10 21:57:52.951485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.318 [2024-12-10 21:57:52.992727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.318 [2024-12-10 21:57:52.992770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:45.318 [2024-12-10 21:57:52.992803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.252 ms 00:26:45.318 [2024-12-10 21:57:52.992821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.318 [2024-12-10 21:57:52.992905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.318 [2024-12-10 21:57:52.992918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:45.318 [2024-12-10 21:57:52.992931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:45.318 [2024-12-10 21:57:52.992944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.318 [2024-12-10 21:57:53.044622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.318 [2024-12-10 21:57:53.044666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.318 [2024-12-10 21:57:53.044682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.696 ms 00:26:45.318 [2024-12-10 21:57:53.044710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.318 [2024-12-10 21:57:53.044749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.318 [2024-12-10 21:57:53.044763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:26:45.318 [2024-12-10 21:57:53.044782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:45.318 [2024-12-10 21:57:53.044794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.318 [2024-12-10 21:57:53.045335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.318 [2024-12-10 21:57:53.045362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:45.318 [2024-12-10 21:57:53.045376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:26:45.318 [2024-12-10 21:57:53.045388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.318 [2024-12-10 21:57:53.045520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.318 [2024-12-10 21:57:53.045537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:45.318 [2024-12-10 21:57:53.045555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:26:45.318 [2024-12-10 21:57:53.045567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.065006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.065076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:45.579 [2024-12-10 21:57:53.065093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.445 ms 00:26:45.579 [2024-12-10 21:57:53.065106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.084867] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:45.579 [2024-12-10 21:57:53.084912] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:45.579 [2024-12-10 21:57:53.084931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.084944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:45.579 [2024-12-10 21:57:53.084958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.738 ms 00:26:45.579 [2024-12-10 21:57:53.084971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.114112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.114161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:45.579 [2024-12-10 21:57:53.114177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.137 ms 00:26:45.579 [2024-12-10 21:57:53.114189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.132196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.132243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:45.579 [2024-12-10 21:57:53.132259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.946 ms 00:26:45.579 [2024-12-10 21:57:53.132273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.150147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.150192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:45.579 [2024-12-10 21:57:53.150225] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.858 ms 00:26:45.579 [2024-12-10 21:57:53.150238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.151020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.151069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:45.579 [2024-12-10 21:57:53.151090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:26:45.579 [2024-12-10 21:57:53.151102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.236619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.236694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:45.579 [2024-12-10 21:57:53.236739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.626 ms 00:26:45.579 [2024-12-10 21:57:53.236752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.247291] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:45.579 [2024-12-10 21:57:53.249700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.249733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:45.579 [2024-12-10 21:57:53.249766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.913 ms 00:26:45.579 [2024-12-10 21:57:53.249779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.249869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.249884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:45.579 [2024-12-10 21:57:53.249898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:45.579 [2024-12-10 21:57:53.249915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.249998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.250012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:45.579 [2024-12-10 21:57:53.250025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:45.579 [2024-12-10 21:57:53.250036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.250097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.250113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:45.579 [2024-12-10 21:57:53.250125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:45.579 [2024-12-10 21:57:53.250138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.250189] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:45.579 [2024-12-10 21:57:53.250204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.250216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:45.579 [2024-12-10 21:57:53.250229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:45.579 [2024-12-10 21:57:53.250242] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.285599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.285646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:45.579 [2024-12-10 21:57:53.285688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.357 ms 00:26:45.579 [2024-12-10 21:57:53.285701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.285782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.579 [2024-12-10 21:57:53.285797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:45.579 [2024-12-10 21:57:53.285810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:45.579 [2024-12-10 21:57:53.285822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.579 [2024-12-10 21:57:53.287077] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.518 ms, result 0 00:26:46.961  [2024-12-10T21:57:55.629Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-10T21:57:56.568Z] Copying: 44/1024 [MB] (22 MBps) [2024-12-10T21:57:57.507Z] Copying: 66/1024 [MB] (22 MBps) [2024-12-10T21:57:58.446Z] Copying: 88/1024 [MB] (21 MBps) [2024-12-10T21:57:59.385Z] Copying: 109/1024 [MB] (21 MBps) [2024-12-10T21:58:00.324Z] Copying: 132/1024 [MB] (22 MBps) [2024-12-10T21:58:01.706Z] Copying: 153/1024 [MB] (21 MBps) [2024-12-10T21:58:02.645Z] Copying: 175/1024 [MB] (21 MBps) [2024-12-10T21:58:03.585Z] Copying: 197/1024 [MB] (21 MBps) [2024-12-10T21:58:04.525Z] Copying: 219/1024 [MB] (22 MBps) [2024-12-10T21:58:05.463Z] Copying: 241/1024 [MB] (22 MBps) [2024-12-10T21:58:06.401Z] Copying: 264/1024 [MB] (22 MBps) [2024-12-10T21:58:07.363Z] Copying: 287/1024 [MB] (22 MBps) [2024-12-10T21:58:08.307Z] Copying: 309/1024 [MB] (21 MBps) [2024-12-10T21:58:09.687Z] Copying: 332/1024 [MB] (23 MBps) [2024-12-10T21:58:10.626Z] Copying: 355/1024 [MB] (22 MBps) [2024-12-10T21:58:11.565Z] Copying: 378/1024 [MB] (22 MBps) [2024-12-10T21:58:12.504Z] Copying: 401/1024 [MB] (22 MBps) [2024-12-10T21:58:13.443Z] Copying: 422/1024 [MB] (21 MBps) [2024-12-10T21:58:14.381Z] Copying: 445/1024 [MB] (22 MBps) [2024-12-10T21:58:15.319Z] Copying: 468/1024 [MB] (22 MBps) [2024-12-10T21:58:16.699Z] Copying: 490/1024 [MB] (22 MBps) [2024-12-10T21:58:17.268Z] Copying: 513/1024 [MB] (22 MBps) [2024-12-10T21:58:18.647Z] Copying: 534/1024 [MB] (21 MBps) [2024-12-10T21:58:19.587Z] Copying: 557/1024 [MB] (22 MBps) [2024-12-10T21:58:20.526Z] Copying: 579/1024 [MB] (22 MBps) [2024-12-10T21:58:21.465Z] Copying: 601/1024 [MB] (21 MBps) [2024-12-10T21:58:22.406Z] Copying: 623/1024 [MB] (22 MBps) [2024-12-10T21:58:23.345Z] Copying: 645/1024 [MB] (21 MBps) [2024-12-10T21:58:24.285Z] Copying: 667/1024 [MB] (22 MBps) [2024-12-10T21:58:25.665Z] Copying: 690/1024 [MB] (22 MBps) [2024-12-10T21:58:26.604Z] Copying: 711/1024 [MB] (21 MBps) [2024-12-10T21:58:27.545Z] Copying: 733/1024 [MB] (21 MBps) [2024-12-10T21:58:28.485Z] Copying: 755/1024 [MB] (22 MBps) [2024-12-10T21:58:29.423Z] Copying: 777/1024 [MB] (21 MBps) [2024-12-10T21:58:30.378Z] Copying: 799/1024 [MB] (21 MBps) [2024-12-10T21:58:31.381Z] Copying: 821/1024 [MB] (22 MBps) [2024-12-10T21:58:32.318Z] Copying: 842/1024 [MB] (21 MBps) [2024-12-10T21:58:33.255Z] Copying: 864/1024 [MB] (21 MBps) [2024-12-10T21:58:34.634Z] Copying: 886/1024 [MB] (22 MBps) 
[2024-12-10T21:58:35.571Z] Copying: 908/1024 [MB] (21 MBps) [2024-12-10T21:58:36.506Z] Copying: 931/1024 [MB] (22 MBps) [2024-12-10T21:58:37.443Z] Copying: 953/1024 [MB] (22 MBps) [2024-12-10T21:58:38.380Z] Copying: 975/1024 [MB] (22 MBps) [2024-12-10T21:58:39.318Z] Copying: 998/1024 [MB] (22 MBps) [2024-12-10T21:58:40.256Z] Copying: 1021/1024 [MB] (22 MBps) [2024-12-10T21:58:40.256Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-12-10 21:58:40.123276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.525 [2024-12-10 21:58:40.123370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:32.525 [2024-12-10 21:58:40.123403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:32.525 [2024-12-10 21:58:40.123416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.525 [2024-12-10 21:58:40.125260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:32.525 [2024-12-10 21:58:40.131019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.525 [2024-12-10 21:58:40.131081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:32.525 [2024-12-10 21:58:40.131098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.725 ms 00:27:32.525 [2024-12-10 21:58:40.131127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.525 [2024-12-10 21:58:40.141955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.525 [2024-12-10 21:58:40.142003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:32.525 [2024-12-10 21:58:40.142019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.146 ms 00:27:32.525 [2024-12-10 21:58:40.142057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.525 [2024-12-10 21:58:40.166363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.525 [2024-12-10 21:58:40.166434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:32.525 [2024-12-10 21:58:40.166450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.293 ms 00:27:32.525 [2024-12-10 21:58:40.166492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.525 [2024-12-10 21:58:40.171349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.525 [2024-12-10 21:58:40.171386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:32.525 [2024-12-10 21:58:40.171419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:27:32.525 [2024-12-10 21:58:40.171441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.525 [2024-12-10 21:58:40.206803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.525 [2024-12-10 21:58:40.206846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:32.525 [2024-12-10 21:58:40.206861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.378 ms 00:27:32.525 [2024-12-10 21:58:40.206889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.525 [2024-12-10 21:58:40.227791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.526 [2024-12-10 21:58:40.227831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:32.526 [2024-12-10 21:58:40.227864] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.893 ms 00:27:32.526 [2024-12-10 21:58:40.227876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.786 [2024-12-10 21:58:40.364399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.786 [2024-12-10 21:58:40.364461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:32.786 [2024-12-10 21:58:40.364478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 136.697 ms 00:27:32.786 [2024-12-10 21:58:40.364490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.786 [2024-12-10 21:58:40.399038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.786 [2024-12-10 21:58:40.399084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:32.786 [2024-12-10 21:58:40.399099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.584 ms 00:27:32.786 [2024-12-10 21:58:40.399126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.786 [2024-12-10 21:58:40.433986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.786 [2024-12-10 21:58:40.434024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:32.786 [2024-12-10 21:58:40.434039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.873 ms 00:27:32.786 [2024-12-10 21:58:40.434058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.786 [2024-12-10 21:58:40.467734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.786 [2024-12-10 21:58:40.467774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:32.786 [2024-12-10 21:58:40.467805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.672 ms 00:27:32.786 [2024-12-10 21:58:40.467817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.786 [2024-12-10 21:58:40.501821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.786 [2024-12-10 21:58:40.501870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:32.786 [2024-12-10 21:58:40.501902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.974 ms 00:27:32.786 [2024-12-10 21:58:40.501913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.786 [2024-12-10 21:58:40.501959] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:32.786 [2024-12-10 21:58:40.501978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 112384 / 261120 wr_cnt: 1 state: open 00:27:32.786 [2024-12-10 21:58:40.501993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:27:32.786 [2024-12-10 21:58:40.502098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:32.786 [2024-12-10 21:58:40.502287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.502993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503088] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:32.787 [2024-12-10 21:58:40.503335] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:32.787 [2024-12-10 21:58:40.503347] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94ce3722-1699-4a2c-9d85-909c122158e6 00:27:32.787 [2024-12-10 21:58:40.503360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 112384 00:27:32.787 [2024-12-10 21:58:40.503372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 113344 00:27:32.787 [2024-12-10 21:58:40.503384] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 112384 00:27:32.787 [2024-12-10 21:58:40.503397] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0085 00:27:32.787 [2024-12-10 21:58:40.503428] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:32.787 [2024-12-10 21:58:40.503441] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:32.787 [2024-12-10 21:58:40.503453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:32.787 [2024-12-10 21:58:40.503464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:32.787 [2024-12-10 21:58:40.503474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:32.787 [2024-12-10 21:58:40.503487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.787 [2024-12-10 21:58:40.503499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:32.787 [2024-12-10 21:58:40.503512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:27:32.787 [2024-12-10 21:58:40.503523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.047 [2024-12-10 21:58:40.523117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.047 [2024-12-10 21:58:40.523151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:33.047 [2024-12-10 21:58:40.523191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.572 ms 00:27:33.047 [2024-12-10 21:58:40.523203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.047 [2024-12-10 21:58:40.523769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.047 [2024-12-10 21:58:40.523795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:33.047 [2024-12-10 21:58:40.523808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:27:33.047 [2024-12-10 21:58:40.523821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.047 [2024-12-10 21:58:40.573881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.047 [2024-12-10 21:58:40.573927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.047 [2024-12-10 21:58:40.573942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.047 [2024-12-10 21:58:40.573971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.047 [2024-12-10 21:58:40.574034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.047 [2024-12-10 21:58:40.574048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.047 [2024-12-10 21:58:40.574061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.047 [2024-12-10 21:58:40.574083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.047 [2024-12-10 21:58:40.574161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.047 [2024-12-10 21:58:40.574176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.047 [2024-12-10 21:58:40.574195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.047 [2024-12-10 21:58:40.574208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.047 [2024-12-10 21:58:40.574228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.047 [2024-12-10 21:58:40.574241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.047 [2024-12-10 21:58:40.574254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.047 [2024-12-10 21:58:40.574266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:33.047 [2024-12-10 21:58:40.696177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.047 [2024-12-10 21:58:40.696253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.047 [2024-12-10 21:58:40.696287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.047 [2024-12-10 21:58:40.696301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.307 [2024-12-10 21:58:40.792747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.307 [2024-12-10 21:58:40.792813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.307 [2024-12-10 21:58:40.792830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.307 [2024-12-10 21:58:40.792859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.307 [2024-12-10 21:58:40.792973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.307 [2024-12-10 21:58:40.792988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.307 [2024-12-10 21:58:40.793001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.307 [2024-12-10 21:58:40.793019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.307 [2024-12-10 21:58:40.793077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.307 [2024-12-10 21:58:40.793091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.307 [2024-12-10 21:58:40.793103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.307 [2024-12-10 21:58:40.793116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.307 [2024-12-10 21:58:40.793263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.307 [2024-12-10 21:58:40.793280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.307 [2024-12-10 21:58:40.793293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.307 [2024-12-10 21:58:40.793311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.307 [2024-12-10 21:58:40.793360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.307 [2024-12-10 21:58:40.793375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:33.307 [2024-12-10 21:58:40.793388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.307 [2024-12-10 21:58:40.793400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.307 [2024-12-10 21:58:40.793444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.307 [2024-12-10 21:58:40.793458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.307 [2024-12-10 21:58:40.793471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.307 [2024-12-10 21:58:40.793482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.307 [2024-12-10 21:58:40.793538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.307 [2024-12-10 21:58:40.793553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.307 [2024-12-10 21:58:40.793566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.307 [2024-12-10 
21:58:40.793578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.307 [2024-12-10 21:58:40.793737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 673.977 ms, result 0 00:27:34.688 00:27:34.688 00:27:34.688 21:58:42 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:34.688 [2024-12-10 21:58:42.298348] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:27:34.688 [2024-12-10 21:58:42.298702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82174 ] 00:27:34.949 [2024-12-10 21:58:42.480578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.949 [2024-12-10 21:58:42.593260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.519 [2024-12-10 21:58:42.962582] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:35.519 [2024-12-10 21:58:42.962686] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:35.519 [2024-12-10 21:58:43.128037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.519 [2024-12-10 21:58:43.128131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:35.519 [2024-12-10 21:58:43.128152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:35.519 [2024-12-10 21:58:43.128164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.519 [2024-12-10 21:58:43.128218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.519 [2024-12-10 21:58:43.128236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.519 [2024-12-10 21:58:43.128249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:35.519 [2024-12-10 21:58:43.128261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.519 [2024-12-10 21:58:43.128286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:35.519 [2024-12-10 21:58:43.129266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:35.519 [2024-12-10 21:58:43.129300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.519 [2024-12-10 21:58:43.129313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.519 [2024-12-10 21:58:43.129327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:27:35.519 [2024-12-10 21:58:43.129339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.130869] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:35.520 [2024-12-10 21:58:43.149816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.149864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:35.520 [2024-12-10 21:58:43.149897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.979 
ms 00:27:35.520 [2024-12-10 21:58:43.149910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.149985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.150000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:35.520 [2024-12-10 21:58:43.150013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:35.520 [2024-12-10 21:58:43.150025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.158910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.158946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.520 [2024-12-10 21:58:43.158977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.797 ms 00:27:35.520 [2024-12-10 21:58:43.158994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.159086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.159102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.520 [2024-12-10 21:58:43.159115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:35.520 [2024-12-10 21:58:43.159127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.159175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.159189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:35.520 [2024-12-10 21:58:43.159202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:35.520 [2024-12-10 21:58:43.159214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.159247] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:35.520 [2024-12-10 21:58:43.164714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.164751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.520 [2024-12-10 21:58:43.164787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.482 ms 00:27:35.520 [2024-12-10 21:58:43.164800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.164837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.164851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:35.520 [2024-12-10 21:58:43.164864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:35.520 [2024-12-10 21:58:43.164876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.164934] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:35.520 [2024-12-10 21:58:43.164962] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:35.520 [2024-12-10 21:58:43.164999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:35.520 [2024-12-10 21:58:43.165025] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 
00:27:35.520 [2024-12-10 21:58:43.165151] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:35.520 [2024-12-10 21:58:43.165169] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:35.520 [2024-12-10 21:58:43.165187] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:35.520 [2024-12-10 21:58:43.165202] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165217] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165231] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:35.520 [2024-12-10 21:58:43.165244] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:35.520 [2024-12-10 21:58:43.165256] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:35.520 [2024-12-10 21:58:43.165272] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:35.520 [2024-12-10 21:58:43.165286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.165298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:35.520 [2024-12-10 21:58:43.165310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:27:35.520 [2024-12-10 21:58:43.165322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.165401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.520 [2024-12-10 21:58:43.165414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:35.520 [2024-12-10 21:58:43.165427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:35.520 [2024-12-10 21:58:43.165439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.520 [2024-12-10 21:58:43.165537] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:35.520 [2024-12-10 21:58:43.165553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:35.520 [2024-12-10 21:58:43.165566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:35.520 [2024-12-10 21:58:43.165601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:35.520 [2024-12-10 21:58:43.165636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:35.520 [2024-12-10 21:58:43.165658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:35.520 [2024-12-10 21:58:43.165671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:35.520 [2024-12-10 21:58:43.165683] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:27:35.520 [2024-12-10 21:58:43.165708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:35.520 [2024-12-10 21:58:43.165719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:35.520 [2024-12-10 21:58:43.165730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:35.520 [2024-12-10 21:58:43.165753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:35.520 [2024-12-10 21:58:43.165788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:35.520 [2024-12-10 21:58:43.165822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:35.520 [2024-12-10 21:58:43.165855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:35.520 [2024-12-10 21:58:43.165888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.520 [2024-12-10 21:58:43.165909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:35.520 [2024-12-10 21:58:43.165919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:35.520 [2024-12-10 21:58:43.165930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:35.520 [2024-12-10 21:58:43.165941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:35.520 [2024-12-10 21:58:43.165952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:35.520 [2024-12-10 21:58:43.165963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:35.520 [2024-12-10 21:58:43.165974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:35.520 [2024-12-10 21:58:43.165985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:35.520 [2024-12-10 21:58:43.165995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.520 [2024-12-10 21:58:43.166006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:35.520 [2024-12-10 21:58:43.166017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:35.520 [2024-12-10 21:58:43.166028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.521 [2024-12-10 21:58:43.166040] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:35.521 [2024-12-10 
21:58:43.166070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:35.521 [2024-12-10 21:58:43.166082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:35.521 [2024-12-10 21:58:43.166095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.521 [2024-12-10 21:58:43.166106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:35.521 [2024-12-10 21:58:43.166118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:35.521 [2024-12-10 21:58:43.166129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:35.521 [2024-12-10 21:58:43.166141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:35.521 [2024-12-10 21:58:43.166152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:35.521 [2024-12-10 21:58:43.166163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:35.521 [2024-12-10 21:58:43.166177] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:35.521 [2024-12-10 21:58:43.166192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.521 [2024-12-10 21:58:43.166212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:35.521 [2024-12-10 21:58:43.166225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:35.521 [2024-12-10 21:58:43.166238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:35.521 [2024-12-10 21:58:43.166250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:35.521 [2024-12-10 21:58:43.166263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:35.521 [2024-12-10 21:58:43.166275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:35.521 [2024-12-10 21:58:43.166288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:35.521 [2024-12-10 21:58:43.166300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:35.521 [2024-12-10 21:58:43.166311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:35.521 [2024-12-10 21:58:43.166323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:35.521 [2024-12-10 21:58:43.166335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:35.521 [2024-12-10 21:58:43.166347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:35.521 [2024-12-10 21:58:43.166359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:27:35.521 [2024-12-10 21:58:43.166372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:35.521 [2024-12-10 21:58:43.166384] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:35.521 [2024-12-10 21:58:43.166398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.521 [2024-12-10 21:58:43.166411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:35.521 [2024-12-10 21:58:43.166432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:35.521 [2024-12-10 21:58:43.166445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:35.521 [2024-12-10 21:58:43.166457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:35.521 [2024-12-10 21:58:43.166474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.521 [2024-12-10 21:58:43.166488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:35.521 [2024-12-10 21:58:43.166500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:27:35.521 [2024-12-10 21:58:43.166512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.521 [2024-12-10 21:58:43.207019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.521 [2024-12-10 21:58:43.207075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:35.521 [2024-12-10 21:58:43.207092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.518 ms 00:27:35.521 [2024-12-10 21:58:43.207110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.521 [2024-12-10 21:58:43.207188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.521 [2024-12-10 21:58:43.207202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:35.521 [2024-12-10 21:58:43.207215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:35.521 [2024-12-10 21:58:43.207227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.781 [2024-12-10 21:58:43.256986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.781 [2024-12-10 21:58:43.257031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.781 [2024-12-10 21:58:43.257055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.774 ms 00:27:35.781 [2024-12-10 21:58:43.257084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.781 [2024-12-10 21:58:43.257122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.781 [2024-12-10 21:58:43.257135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.781 [2024-12-10 21:58:43.257153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:35.781 [2024-12-10 21:58:43.257166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.781 [2024-12-10 21:58:43.258013] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.781 [2024-12-10 21:58:43.258040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.781 [2024-12-10 21:58:43.258065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:27:35.781 [2024-12-10 21:58:43.258078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.781 [2024-12-10 21:58:43.258205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.781 [2024-12-10 21:58:43.258221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.782 [2024-12-10 21:58:43.258239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:27:35.782 [2024-12-10 21:58:43.258251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.277753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.277799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.782 [2024-12-10 21:58:43.277814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.507 ms 00:27:35.782 [2024-12-10 21:58:43.277826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.296747] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:35.782 [2024-12-10 21:58:43.296789] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:35.782 [2024-12-10 21:58:43.296807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.296820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:35.782 [2024-12-10 21:58:43.296851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.882 ms 00:27:35.782 [2024-12-10 21:58:43.296863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.325156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.325202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:35.782 [2024-12-10 21:58:43.325218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.290 ms 00:27:35.782 [2024-12-10 21:58:43.325230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.343170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.343211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:35.782 [2024-12-10 21:58:43.343226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.902 ms 00:27:35.782 [2024-12-10 21:58:43.343237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.359803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.359841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:35.782 [2024-12-10 21:58:43.359856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.534 ms 00:27:35.782 [2024-12-10 21:58:43.359867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.360678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 
[2024-12-10 21:58:43.360715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:35.782 [2024-12-10 21:58:43.360734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:27:35.782 [2024-12-10 21:58:43.360747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.444178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.444272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:35.782 [2024-12-10 21:58:43.444301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.536 ms 00:27:35.782 [2024-12-10 21:58:43.444314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.454468] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:35.782 [2024-12-10 21:58:43.456819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.456853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:35.782 [2024-12-10 21:58:43.456869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.472 ms 00:27:35.782 [2024-12-10 21:58:43.456881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.456987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.457003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:35.782 [2024-12-10 21:58:43.457017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:35.782 [2024-12-10 21:58:43.457035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.458939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.458984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:35.782 [2024-12-10 21:58:43.458999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:27:35.782 [2024-12-10 21:58:43.459012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.459067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.459082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:35.782 [2024-12-10 21:58:43.459106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:35.782 [2024-12-10 21:58:43.459118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.459171] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:35.782 [2024-12-10 21:58:43.459202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.459215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:35.782 [2024-12-10 21:58:43.459228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:35.782 [2024-12-10 21:58:43.459240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.493592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.493640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:35.782 
[2024-12-10 21:58:43.493681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.378 ms 00:27:35.782 [2024-12-10 21:58:43.493693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.493779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.782 [2024-12-10 21:58:43.493794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:35.782 [2024-12-10 21:58:43.493807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:35.782 [2024-12-10 21:58:43.493819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.782 [2024-12-10 21:58:43.495158] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.180 ms, result 0 00:27:37.163  [2024-12-10T21:58:45.832Z] Copying: 20/1024 [MB] (20 MBps) [flattened progress-meter output: 41 intermediate Copying updates, 20-25 MBps per interval, elided] [2024-12-10T21:59:26.789Z] Copying: 1022/1024 [MB] (23 MBps) [2024-12-10T21:59:27.048Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-10 21:59:26.895894] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.317 [2024-12-10 21:59:26.895973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:19.317 [2024-12-10 21:59:26.896002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:19.317 [2024-12-10 21:59:26.896028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.317 [2024-12-10 21:59:26.896097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:19.317 [2024-12-10 21:59:26.902105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.317 [2024-12-10 21:59:26.902147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:19.317 [2024-12-10 21:59:26.902161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.986 ms 00:28:19.317 [2024-12-10 21:59:26.902173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.317 [2024-12-10 21:59:26.902406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.317 [2024-12-10 21:59:26.902421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:19.317 [2024-12-10 21:59:26.902444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:28:19.317 [2024-12-10 21:59:26.902464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.317 [2024-12-10 21:59:26.907297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.317 [2024-12-10 21:59:26.907337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:19.317 [2024-12-10 21:59:26.907352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.821 ms 00:28:19.317 [2024-12-10 21:59:26.907365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.317 [2024-12-10 21:59:26.912156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.317 [2024-12-10 21:59:26.912186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:19.317 [2024-12-10 21:59:26.912198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.758 ms 00:28:19.317 [2024-12-10 21:59:26.912215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.317 [2024-12-10 21:59:26.947608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.317 [2024-12-10 21:59:26.947641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:19.317 [2024-12-10 21:59:26.947655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.412 ms 00:28:19.317 [2024-12-10 21:59:26.947665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.317 [2024-12-10 21:59:26.969269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.317 [2024-12-10 21:59:26.969329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:19.317 [2024-12-10 21:59:26.969349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.596 ms 00:28:19.317 [2024-12-10 21:59:26.969363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.578 [2024-12-10 21:59:27.117661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.578 [2024-12-10 21:59:27.117733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:19.578 [2024-12-10 21:59:27.117753] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 148.473 ms 00:28:19.578 [2024-12-10 21:59:27.117768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.578 [2024-12-10 21:59:27.153134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.578 [2024-12-10 21:59:27.153179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:19.578 [2024-12-10 21:59:27.153212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.401 ms 00:28:19.578 [2024-12-10 21:59:27.153225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.578 [2024-12-10 21:59:27.187239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.578 [2024-12-10 21:59:27.187283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:19.578 [2024-12-10 21:59:27.187315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.025 ms 00:28:19.578 [2024-12-10 21:59:27.187327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.578 [2024-12-10 21:59:27.220872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.578 [2024-12-10 21:59:27.220913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:19.578 [2024-12-10 21:59:27.220928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.557 ms 00:28:19.578 [2024-12-10 21:59:27.220939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.578 [2024-12-10 21:59:27.254385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.578 [2024-12-10 21:59:27.254430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:19.578 [2024-12-10 21:59:27.254446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.399 ms 00:28:19.578 [2024-12-10 21:59:27.254472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.578 [2024-12-10 21:59:27.254514] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:19.578 [2024-12-10 21:59:27.254534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:19.578 [2024-12-10 21:59:27.254548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254662] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 
[2024-12-10 21:59:27.254981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.254993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:28:19.578 [2024-12-10 21:59:27.255303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:19.578 [2024-12-10 21:59:27.255329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:19.579 [2024-12-10 21:59:27.255803] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:19.579 [2024-12-10 21:59:27.255814] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94ce3722-1699-4a2c-9d85-909c122158e6 00:28:19.579 [2024-12-10 21:59:27.255828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:19.579 [2024-12-10 21:59:27.255840] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 19648 00:28:19.579 [2024-12-10 21:59:27.255851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 18688 00:28:19.579 [2024-12-10 21:59:27.255864] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0514 00:28:19.579 [2024-12-10 21:59:27.255883] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:19.579 [2024-12-10 21:59:27.255909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:19.579 [2024-12-10 21:59:27.255921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:19.579 [2024-12-10 21:59:27.255932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:19.579 [2024-12-10 21:59:27.255943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:19.579 [2024-12-10 21:59:27.255955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:28:19.579 [2024-12-10 21:59:27.255967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:19.579 [2024-12-10 21:59:27.255980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:28:19.579 [2024-12-10 21:59:27.255991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.579 [2024-12-10 21:59:27.274839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.579 [2024-12-10 21:59:27.274876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:19.579 [2024-12-10 21:59:27.274916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.839 ms 00:28:19.579 [2024-12-10 21:59:27.274928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.579 [2024-12-10 21:59:27.275513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.579 [2024-12-10 21:59:27.275534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:19.579 [2024-12-10 21:59:27.275548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:28:19.579 [2024-12-10 21:59:27.275560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.325141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.325188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.839 [2024-12-10 21:59:27.325221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.325234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.325291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.325304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.839 [2024-12-10 21:59:27.325319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.325330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.325424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.325440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.839 [2024-12-10 21:59:27.325458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.325470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.325491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.325504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.839 [2024-12-10 21:59:27.325533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.325545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.443539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.443602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.839 [2024-12-10 21:59:27.443619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.443648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 
21:59:27.540611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.540664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.839 [2024-12-10 21:59:27.540681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.540709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.540804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.540818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.839 [2024-12-10 21:59:27.540831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.540851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.540894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.540908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.839 [2024-12-10 21:59:27.540921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.540932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.541100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.541117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.839 [2024-12-10 21:59:27.541131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.541143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.541198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.541212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:19.839 [2024-12-10 21:59:27.541225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.541237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.541280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.541294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.839 [2024-12-10 21:59:27.541306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.541317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.541374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.839 [2024-12-10 21:59:27.541388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.839 [2024-12-10 21:59:27.541401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.839 [2024-12-10 21:59:27.541412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.839 [2024-12-10 21:59:27.541555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 646.692 ms, result 0 00:28:21.219 00:28:21.219 00:28:21.219 21:59:28 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:22.598 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:22.598 
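The statistics dump above closes the books on the restore run: the 131072 total valid LBAs match the single open band (Band 1: 131072 / 261120), and the write-amplification factor is simply total writes divided by user writes. A quick check of the logged value, using the two counters from the dump:

```bash
# Recompute the write-amplification figure reported by ftl_dev_dump_stats:
# WAF = total writes / user writes.
total_writes=19648   # "total writes" from the dump above
user_writes=18688    # "user writes" from the dump above
awk -v t="$total_writes" -v u="$user_writes" \
    'BEGIN { printf "WAF: %.4f\n", t / u }'
# -> WAF: 1.0514, matching the value logged at ftl_debug.c:216
```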
21:59:30 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:22.598 21:59:30 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:22.598 21:59:30 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:22.858 Process with pid 80469 is not found 00:28:22.858 Remove shared memory files 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80469 00:28:22.858 21:59:30 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 80469 ']' 00:28:22.858 21:59:30 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 80469 00:28:22.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80469) - No such process 00:28:22.858 21:59:30 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 80469 is not found' 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:22.858 21:59:30 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:22.858 00:28:22.858 real 3m33.942s 00:28:22.858 user 3m20.931s 00:28:22.858 sys 0m14.328s 00:28:22.858 21:59:30 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.858 21:59:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:22.858 ************************************ 00:28:22.858 END TEST ftl_restore 00:28:22.858 ************************************ 00:28:22.858 21:59:30 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:22.858 21:59:30 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:22.858 21:59:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.858 21:59:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:22.858 ************************************ 00:28:22.858 START TEST ftl_dirty_shutdown 00:28:22.858 ************************************ 00:28:22.858 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:23.118 * Looking for test storage... 
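restore_kill above removes the scratch files and then calls killprocess on pid 80469, which is already gone, hence "kill: (80469) - No such process" and the "Process with pid 80469 is not found" message. The probe is kill -0, which sends no signal and only tests whether the process exists. A minimal sketch of that pattern (killprocess_sketch is a hypothetical stand-in, not the full autotest_common.sh helper):

```bash
# Existence probe via signal 0: "kill -0 $pid" succeeds only when the
# process exists and is signalable; no signal is actually delivered.
killprocess_sketch() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                                # still running: terminate it
    else
        echo "Process with pid $pid is not found"  # already exited, as here
    fi
}
killprocess_sketch 80469    # pid taken from the log above
```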
00:28:23.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:23.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.118 --rc genhtml_branch_coverage=1 00:28:23.118 --rc genhtml_function_coverage=1 00:28:23.118 --rc genhtml_legend=1 00:28:23.118 --rc geninfo_all_blocks=1 00:28:23.118 --rc geninfo_unexecuted_blocks=1 00:28:23.118 00:28:23.118 ' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:23.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.118 --rc genhtml_branch_coverage=1 00:28:23.118 --rc genhtml_function_coverage=1 00:28:23.118 --rc genhtml_legend=1 00:28:23.118 --rc geninfo_all_blocks=1 00:28:23.118 --rc geninfo_unexecuted_blocks=1 00:28:23.118 00:28:23.118 ' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:23.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.118 --rc genhtml_branch_coverage=1 00:28:23.118 --rc genhtml_function_coverage=1 00:28:23.118 --rc genhtml_legend=1 00:28:23.118 --rc geninfo_all_blocks=1 00:28:23.118 --rc geninfo_unexecuted_blocks=1 00:28:23.118 00:28:23.118 ' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:23.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.118 --rc genhtml_branch_coverage=1 00:28:23.118 --rc genhtml_function_coverage=1 00:28:23.118 --rc genhtml_legend=1 00:28:23.118 --rc geninfo_all_blocks=1 00:28:23.118 --rc geninfo_unexecuted_blocks=1 00:28:23.118 00:28:23.118 ' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:23.118 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:23.119 21:59:30 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82732 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82732 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82732 ']' 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.119 21:59:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:23.378 [2024-12-10 21:59:30.906420] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
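[annotation] The xtrace above (scripts/common.sh@333-368) is SPDK's component-wise version comparison: both version strings are split on '.', '-' and ':' into arrays, then compared field by field, which is why "lt 1.15 2" succeeds and the coverage-era lcov options get exported. A minimal standalone sketch of the same idiom follows; the function body is illustrative and reconstructed from the trace, not copied from scripts/common.sh (missing fields compare as 0 here, where the real helper also regex-checks each field is numeric):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':' as in the trace
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: enable the --rc lcov_* coverage flags"   # the branch taken above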
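[annotation] By this point dirty_shutdown.sh has parsed "-c 0000:00:10.0" as the NV-cache PCI address, taken 0000:00:11.0 as the base device, started spdk_tgt pinned to core 0 ("-m 0x1"), and blocked in waitforlisten until the RPC socket answers. A condensed sketch of that launch-and-wait pattern, assuming the default socket /var/tmp/spdk.sock; the polling loop is illustrative, SPDK's real waitforlisten has more retry and diagnostics logic:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" -m 0x1 &                 # single reactor on core 0, as in the trace
    svcpid=$!
    # rpc.py exits non-zero until the target listens on the UNIX socket.
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$svcpid" 2> /dev/null || exit 1   # give up if spdk_tgt died during startup
        sleep 0.1
    done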
00:28:23.378 [2024-12-10 21:59:30.906792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82732 ] 00:28:23.378 [2024-12-10 21:59:31.087622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.637 [2024-12-10 21:59:31.204516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.575 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.575 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:24.575 21:59:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:24.575 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:24.575 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:24.575 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:24.575 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:24.575 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:24.834 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:24.834 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:24.834 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:24.834 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:24.834 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:24.834 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:24.834 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:24.834 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:25.094 { 00:28:25.094 "name": "nvme0n1", 00:28:25.094 "aliases": [ 00:28:25.094 "bfb098a2-8c2b-4b21-a25e-d305c83bc153" 00:28:25.094 ], 00:28:25.094 "product_name": "NVMe disk", 00:28:25.094 "block_size": 4096, 00:28:25.094 "num_blocks": 1310720, 00:28:25.094 "uuid": "bfb098a2-8c2b-4b21-a25e-d305c83bc153", 00:28:25.094 "numa_id": -1, 00:28:25.094 "assigned_rate_limits": { 00:28:25.094 "rw_ios_per_sec": 0, 00:28:25.094 "rw_mbytes_per_sec": 0, 00:28:25.094 "r_mbytes_per_sec": 0, 00:28:25.094 "w_mbytes_per_sec": 0 00:28:25.094 }, 00:28:25.094 "claimed": true, 00:28:25.094 "claim_type": "read_many_write_one", 00:28:25.094 "zoned": false, 00:28:25.094 "supported_io_types": { 00:28:25.094 "read": true, 00:28:25.094 "write": true, 00:28:25.094 "unmap": true, 00:28:25.094 "flush": true, 00:28:25.094 "reset": true, 00:28:25.094 "nvme_admin": true, 00:28:25.094 "nvme_io": true, 00:28:25.094 "nvme_io_md": false, 00:28:25.094 "write_zeroes": true, 00:28:25.094 "zcopy": false, 00:28:25.094 "get_zone_info": false, 00:28:25.094 "zone_management": false, 00:28:25.094 "zone_append": false, 00:28:25.094 "compare": true, 00:28:25.094 "compare_and_write": false, 00:28:25.094 "abort": true, 00:28:25.094 "seek_hole": false, 00:28:25.094 "seek_data": false, 00:28:25.094 
"copy": true, 00:28:25.094 "nvme_iov_md": false 00:28:25.094 }, 00:28:25.094 "driver_specific": { 00:28:25.094 "nvme": [ 00:28:25.094 { 00:28:25.094 "pci_address": "0000:00:11.0", 00:28:25.094 "trid": { 00:28:25.094 "trtype": "PCIe", 00:28:25.094 "traddr": "0000:00:11.0" 00:28:25.094 }, 00:28:25.094 "ctrlr_data": { 00:28:25.094 "cntlid": 0, 00:28:25.094 "vendor_id": "0x1b36", 00:28:25.094 "model_number": "QEMU NVMe Ctrl", 00:28:25.094 "serial_number": "12341", 00:28:25.094 "firmware_revision": "8.0.0", 00:28:25.094 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:25.094 "oacs": { 00:28:25.094 "security": 0, 00:28:25.094 "format": 1, 00:28:25.094 "firmware": 0, 00:28:25.094 "ns_manage": 1 00:28:25.094 }, 00:28:25.094 "multi_ctrlr": false, 00:28:25.094 "ana_reporting": false 00:28:25.094 }, 00:28:25.094 "vs": { 00:28:25.094 "nvme_version": "1.4" 00:28:25.094 }, 00:28:25.094 "ns_data": { 00:28:25.094 "id": 1, 00:28:25.094 "can_share": false 00:28:25.094 } 00:28:25.094 } 00:28:25.094 ], 00:28:25.094 "mp_policy": "active_passive" 00:28:25.094 } 00:28:25.094 } 00:28:25.094 ]' 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:25.094 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:25.353 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=032a69f6-c166-44ec-ac35-28deb46342de 00:28:25.353 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:25.353 21:59:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 032a69f6-c166-44ec-ac35-28deb46342de 00:28:25.613 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:25.613 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=701344bb-14bd-4b7d-bb0c-da70ae979c46 00:28:25.613 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 701344bb-14bd-4b7d-bb0c-da70ae979c46 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:25.872 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:26.131 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:26.131 { 00:28:26.131 "name": "1d5cb9f0-9449-41c7-9c3d-5a43c0e21867", 00:28:26.131 "aliases": [ 00:28:26.131 "lvs/nvme0n1p0" 00:28:26.131 ], 00:28:26.131 "product_name": "Logical Volume", 00:28:26.131 "block_size": 4096, 00:28:26.131 "num_blocks": 26476544, 00:28:26.131 "uuid": "1d5cb9f0-9449-41c7-9c3d-5a43c0e21867", 00:28:26.131 "assigned_rate_limits": { 00:28:26.131 "rw_ios_per_sec": 0, 00:28:26.131 "rw_mbytes_per_sec": 0, 00:28:26.131 "r_mbytes_per_sec": 0, 00:28:26.131 "w_mbytes_per_sec": 0 00:28:26.131 }, 00:28:26.132 "claimed": false, 00:28:26.132 "zoned": false, 00:28:26.132 "supported_io_types": { 00:28:26.132 "read": true, 00:28:26.132 "write": true, 00:28:26.132 "unmap": true, 00:28:26.132 "flush": false, 00:28:26.132 "reset": true, 00:28:26.132 "nvme_admin": false, 00:28:26.132 "nvme_io": false, 00:28:26.132 "nvme_io_md": false, 00:28:26.132 "write_zeroes": true, 00:28:26.132 "zcopy": false, 00:28:26.132 "get_zone_info": false, 00:28:26.132 "zone_management": false, 00:28:26.132 "zone_append": false, 00:28:26.132 "compare": false, 00:28:26.132 "compare_and_write": false, 00:28:26.132 "abort": false, 00:28:26.132 "seek_hole": true, 00:28:26.132 "seek_data": true, 00:28:26.132 "copy": false, 00:28:26.132 "nvme_iov_md": false 00:28:26.132 }, 00:28:26.132 "driver_specific": { 00:28:26.132 "lvol": { 00:28:26.132 "lvol_store_uuid": "701344bb-14bd-4b7d-bb0c-da70ae979c46", 00:28:26.132 "base_bdev": "nvme0n1", 00:28:26.132 "thin_provision": true, 00:28:26.132 "num_allocated_clusters": 0, 00:28:26.132 "snapshot": false, 00:28:26.132 "clone": false, 00:28:26.132 "esnap_clone": false 00:28:26.132 } 00:28:26.132 } 00:28:26.132 } 00:28:26.132 ]' 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:26.132 21:59:33 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:26.391 21:59:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:26.391 21:59:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:26.391 21:59:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:26.391 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:26.391 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:26.391 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:26.391 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:26.391 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:26.651 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:26.651 { 00:28:26.651 "name": "1d5cb9f0-9449-41c7-9c3d-5a43c0e21867", 00:28:26.651 "aliases": [ 00:28:26.651 "lvs/nvme0n1p0" 00:28:26.651 ], 00:28:26.651 "product_name": "Logical Volume", 00:28:26.651 "block_size": 4096, 00:28:26.651 "num_blocks": 26476544, 00:28:26.651 "uuid": "1d5cb9f0-9449-41c7-9c3d-5a43c0e21867", 00:28:26.651 "assigned_rate_limits": { 00:28:26.651 "rw_ios_per_sec": 0, 00:28:26.651 "rw_mbytes_per_sec": 0, 00:28:26.651 "r_mbytes_per_sec": 0, 00:28:26.651 "w_mbytes_per_sec": 0 00:28:26.651 }, 00:28:26.651 "claimed": false, 00:28:26.651 "zoned": false, 00:28:26.651 "supported_io_types": { 00:28:26.651 "read": true, 00:28:26.651 "write": true, 00:28:26.651 "unmap": true, 00:28:26.651 "flush": false, 00:28:26.651 "reset": true, 00:28:26.651 "nvme_admin": false, 00:28:26.651 "nvme_io": false, 00:28:26.651 "nvme_io_md": false, 00:28:26.651 "write_zeroes": true, 00:28:26.651 "zcopy": false, 00:28:26.651 "get_zone_info": false, 00:28:26.651 "zone_management": false, 00:28:26.651 "zone_append": false, 00:28:26.651 "compare": false, 00:28:26.651 "compare_and_write": false, 00:28:26.651 "abort": false, 00:28:26.651 "seek_hole": true, 00:28:26.651 "seek_data": true, 00:28:26.651 "copy": false, 00:28:26.651 "nvme_iov_md": false 00:28:26.651 }, 00:28:26.651 "driver_specific": { 00:28:26.651 "lvol": { 00:28:26.651 "lvol_store_uuid": "701344bb-14bd-4b7d-bb0c-da70ae979c46", 00:28:26.651 "base_bdev": "nvme0n1", 00:28:26.651 "thin_provision": true, 00:28:26.651 "num_allocated_clusters": 0, 00:28:26.651 "snapshot": false, 00:28:26.651 "clone": false, 00:28:26.651 "esnap_clone": false 00:28:26.651 } 00:28:26.651 } 00:28:26.651 } 00:28:26.651 ]' 00:28:26.651 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:26.651 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:26.651 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:26.911 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 00:28:27.171 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:27.171 { 00:28:27.171 "name": "1d5cb9f0-9449-41c7-9c3d-5a43c0e21867", 00:28:27.171 "aliases": [ 00:28:27.171 "lvs/nvme0n1p0" 00:28:27.171 ], 00:28:27.171 "product_name": "Logical Volume", 00:28:27.171 "block_size": 4096, 00:28:27.171 "num_blocks": 26476544, 00:28:27.171 "uuid": "1d5cb9f0-9449-41c7-9c3d-5a43c0e21867", 00:28:27.171 "assigned_rate_limits": { 00:28:27.171 "rw_ios_per_sec": 0, 00:28:27.171 "rw_mbytes_per_sec": 0, 00:28:27.171 "r_mbytes_per_sec": 0, 00:28:27.171 "w_mbytes_per_sec": 0 00:28:27.171 }, 00:28:27.171 "claimed": false, 00:28:27.171 "zoned": false, 00:28:27.171 "supported_io_types": { 00:28:27.171 "read": true, 00:28:27.171 "write": true, 00:28:27.171 "unmap": true, 00:28:27.171 "flush": false, 00:28:27.171 "reset": true, 00:28:27.171 "nvme_admin": false, 00:28:27.171 "nvme_io": false, 00:28:27.171 "nvme_io_md": false, 00:28:27.171 "write_zeroes": true, 00:28:27.171 "zcopy": false, 00:28:27.171 "get_zone_info": false, 00:28:27.171 "zone_management": false, 00:28:27.171 "zone_append": false, 00:28:27.171 "compare": false, 00:28:27.171 "compare_and_write": false, 00:28:27.171 "abort": false, 00:28:27.171 "seek_hole": true, 00:28:27.171 "seek_data": true, 00:28:27.171 "copy": false, 00:28:27.171 "nvme_iov_md": false 00:28:27.171 }, 00:28:27.171 "driver_specific": { 00:28:27.171 "lvol": { 00:28:27.171 "lvol_store_uuid": "701344bb-14bd-4b7d-bb0c-da70ae979c46", 00:28:27.171 "base_bdev": "nvme0n1", 00:28:27.171 "thin_provision": true, 00:28:27.171 "num_allocated_clusters": 0, 00:28:27.171 "snapshot": false, 00:28:27.171 "clone": false, 00:28:27.171 "esnap_clone": false 00:28:27.171 } 00:28:27.171 } 00:28:27.171 } 00:28:27.171 ]' 00:28:27.171 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:27.171 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:27.171 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 
--l2p_dram_limit 10' 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:27.432 21:59:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1d5cb9f0-9449-41c7-9c3d-5a43c0e21867 --l2p_dram_limit 10 -c nvc0n1p0 00:28:27.432 [2024-12-10 21:59:35.088219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.088278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:27.432 [2024-12-10 21:59:35.088301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:27.432 [2024-12-10 21:59:35.088314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.088392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.088406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:27.432 [2024-12-10 21:59:35.088422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:27.432 [2024-12-10 21:59:35.088434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.088468] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:27.432 [2024-12-10 21:59:35.089518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:27.432 [2024-12-10 21:59:35.089556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.089571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:27.432 [2024-12-10 21:59:35.089588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:28:27.432 [2024-12-10 21:59:35.089601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.089654] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b02637a8-7852-4645-9f76-e532285a8360 00:28:27.432 [2024-12-10 21:59:35.092156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.092200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:27.432 [2024-12-10 21:59:35.092215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:27.432 [2024-12-10 21:59:35.092231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.104952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.105002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:27.432 [2024-12-10 21:59:35.105018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.630 ms 00:28:27.432 [2024-12-10 21:59:35.105033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.105157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.105176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:27.432 [2024-12-10 21:59:35.105205] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:28:27.432 [2024-12-10 21:59:35.105226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.105302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.105322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:27.432 [2024-12-10 21:59:35.105335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:27.432 [2024-12-10 21:59:35.105355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.105385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:27.432 [2024-12-10 21:59:35.111475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.111654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:27.432 [2024-12-10 21:59:35.111688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms 00:28:27.432 [2024-12-10 21:59:35.111701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.111752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.111765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:27.432 [2024-12-10 21:59:35.111782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:27.432 [2024-12-10 21:59:35.111794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.111840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:27.432 [2024-12-10 21:59:35.111979] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:27.432 [2024-12-10 21:59:35.112003] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:27.432 [2024-12-10 21:59:35.112020] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:27.432 [2024-12-10 21:59:35.112039] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:27.432 [2024-12-10 21:59:35.112082] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:27.432 [2024-12-10 21:59:35.112117] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:27.432 [2024-12-10 21:59:35.112131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:27.432 [2024-12-10 21:59:35.112153] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:27.432 [2024-12-10 21:59:35.112166] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:27.432 [2024-12-10 21:59:35.112182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.112208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:27.432 [2024-12-10 21:59:35.112225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:28:27.432 [2024-12-10 21:59:35.112238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.112322] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.432 [2024-12-10 21:59:35.112336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:27.432 [2024-12-10 21:59:35.112352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:27.432 [2024-12-10 21:59:35.112364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.432 [2024-12-10 21:59:35.112462] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:27.432 [2024-12-10 21:59:35.112477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:27.432 [2024-12-10 21:59:35.112493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:27.432 [2024-12-10 21:59:35.112506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.432 [2024-12-10 21:59:35.112522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:27.432 [2024-12-10 21:59:35.112534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:27.432 [2024-12-10 21:59:35.112549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:27.432 [2024-12-10 21:59:35.112561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:27.432 [2024-12-10 21:59:35.112576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:27.432 [2024-12-10 21:59:35.112587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:27.432 [2024-12-10 21:59:35.112602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:27.432 [2024-12-10 21:59:35.112614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:27.432 [2024-12-10 21:59:35.112630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:27.432 [2024-12-10 21:59:35.112642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:27.433 [2024-12-10 21:59:35.112659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:27.433 [2024-12-10 21:59:35.112671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.433 [2024-12-10 21:59:35.112688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:27.433 [2024-12-10 21:59:35.112700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:27.433 [2024-12-10 21:59:35.112715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.433 [2024-12-10 21:59:35.112727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:27.433 [2024-12-10 21:59:35.112742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:27.433 [2024-12-10 21:59:35.112754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.433 [2024-12-10 21:59:35.112769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:27.433 [2024-12-10 21:59:35.112780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:27.433 [2024-12-10 21:59:35.112795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.433 [2024-12-10 21:59:35.112806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:27.433 [2024-12-10 21:59:35.112821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:27.433 [2024-12-10 21:59:35.112832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.433 [2024-12-10 21:59:35.112846] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:27.433 [2024-12-10 21:59:35.112858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:27.433 [2024-12-10 21:59:35.112873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.433 [2024-12-10 21:59:35.112884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:27.433 [2024-12-10 21:59:35.112901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:27.433 [2024-12-10 21:59:35.112912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:27.433 [2024-12-10 21:59:35.112926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:27.433 [2024-12-10 21:59:35.112938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:27.433 [2024-12-10 21:59:35.112952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:27.433 [2024-12-10 21:59:35.112964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:27.433 [2024-12-10 21:59:35.112980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:27.433 [2024-12-10 21:59:35.112991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.433 [2024-12-10 21:59:35.113006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:27.433 [2024-12-10 21:59:35.113018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:27.433 [2024-12-10 21:59:35.113031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.433 [2024-12-10 21:59:35.113043] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:27.433 [2024-12-10 21:59:35.113059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:27.433 [2024-12-10 21:59:35.113083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:27.433 [2024-12-10 21:59:35.113102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.433 [2024-12-10 21:59:35.113115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:27.433 [2024-12-10 21:59:35.113133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:27.433 [2024-12-10 21:59:35.113145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:27.433 [2024-12-10 21:59:35.113159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:27.433 [2024-12-10 21:59:35.113171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:27.433 [2024-12-10 21:59:35.113186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:27.433 [2024-12-10 21:59:35.113200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:27.433 [2024-12-10 21:59:35.113217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:27.433 [2024-12-10 21:59:35.113234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:27.433 [2024-12-10 21:59:35.113250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:27.433 [2024-12-10 21:59:35.113263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:27.433 [2024-12-10 21:59:35.113279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:27.433 [2024-12-10 21:59:35.113292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:27.433 [2024-12-10 21:59:35.113307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:27.433 [2024-12-10 21:59:35.113320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:27.433 [2024-12-10 21:59:35.113335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:27.433 [2024-12-10 21:59:35.113348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:27.433 [2024-12-10 21:59:35.113368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:27.433 [2024-12-10 21:59:35.113382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:27.433 [2024-12-10 21:59:35.113397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:27.433 [2024-12-10 21:59:35.113410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:27.433 [2024-12-10 21:59:35.113425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:27.433 [2024-12-10 21:59:35.113438] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:27.433 [2024-12-10 21:59:35.113454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:27.433 [2024-12-10 21:59:35.113468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:27.433 [2024-12-10 21:59:35.113484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:27.433 [2024-12-10 21:59:35.113496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:27.433 [2024-12-10 21:59:35.113511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:27.433 [2024-12-10 21:59:35.113524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.433 [2024-12-10 21:59:35.113539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:27.433 [2024-12-10 21:59:35.113552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:28:27.433 [2024-12-10 21:59:35.113570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.433 [2024-12-10 21:59:35.113642] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:27.433 [2024-12-10 21:59:35.113665] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:31.681 [2024-12-10 21:59:39.076623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.681 [2024-12-10 21:59:39.076704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:31.681 [2024-12-10 21:59:39.076727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3969.413 ms 00:28:31.681 [2024-12-10 21:59:39.076743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.681 [2024-12-10 21:59:39.118935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.681 [2024-12-10 21:59:39.119002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:31.681 [2024-12-10 21:59:39.119022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.951 ms 00:28:31.681 [2024-12-10 21:59:39.119038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.681 [2024-12-10 21:59:39.119208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.681 [2024-12-10 21:59:39.119230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:31.681 [2024-12-10 21:59:39.119245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:31.681 [2024-12-10 21:59:39.119272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.681 [2024-12-10 21:59:39.168666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.681 [2024-12-10 21:59:39.168725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:31.681 [2024-12-10 21:59:39.168742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.395 ms 00:28:31.681 [2024-12-10 21:59:39.168758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.681 [2024-12-10 21:59:39.168805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.681 [2024-12-10 21:59:39.168828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:31.681 [2024-12-10 21:59:39.168841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:31.681 [2024-12-10 21:59:39.168869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.681 [2024-12-10 21:59:39.169742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.681 [2024-12-10 21:59:39.169772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:31.681 [2024-12-10 21:59:39.169786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:28:31.681 [2024-12-10 21:59:39.169801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.681 [2024-12-10 21:59:39.169909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.681 [2024-12-10 21:59:39.169927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:31.681 [2024-12-10 21:59:39.169944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:28:31.681 [2024-12-10 21:59:39.169964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.681 [2024-12-10 21:59:39.192379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.681 [2024-12-10 21:59:39.192597] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:31.681 [2024-12-10 21:59:39.192625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.425 ms 00:28:31.681 [2024-12-10 21:59:39.192642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.681 [2024-12-10 21:59:39.233829] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:31.681 [2024-12-10 21:59:39.239160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.682 [2024-12-10 21:59:39.239203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:31.682 [2024-12-10 21:59:39.239229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.471 ms 00:28:31.682 [2024-12-10 21:59:39.239245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.682 [2024-12-10 21:59:39.341397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.682 [2024-12-10 21:59:39.341656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:31.682 [2024-12-10 21:59:39.341693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.258 ms 00:28:31.682 [2024-12-10 21:59:39.341707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.682 [2024-12-10 21:59:39.341912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.682 [2024-12-10 21:59:39.341933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:31.682 [2024-12-10 21:59:39.341953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:28:31.682 [2024-12-10 21:59:39.341966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.682 [2024-12-10 21:59:39.376812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.682 [2024-12-10 21:59:39.376855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:31.682 [2024-12-10 21:59:39.376876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.812 ms 00:28:31.682 [2024-12-10 21:59:39.376888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.954 [2024-12-10 21:59:39.410981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.954 [2024-12-10 21:59:39.411022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:31.954 [2024-12-10 21:59:39.411043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.093 ms 00:28:31.954 [2024-12-10 21:59:39.411077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.954 [2024-12-10 21:59:39.411820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.954 [2024-12-10 21:59:39.411980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:31.955 [2024-12-10 21:59:39.412011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:28:31.955 [2024-12-10 21:59:39.412028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.955 [2024-12-10 21:59:39.516551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.955 [2024-12-10 21:59:39.516595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:31.955 [2024-12-10 21:59:39.516620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.606 ms 00:28:31.955 [2024-12-10 21:59:39.516633] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.955 [2024-12-10 21:59:39.553355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.955 [2024-12-10 21:59:39.553397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:31.955 [2024-12-10 21:59:39.553417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.684 ms 00:28:31.955 [2024-12-10 21:59:39.553430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.955 [2024-12-10 21:59:39.587301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.955 [2024-12-10 21:59:39.587342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:31.955 [2024-12-10 21:59:39.587360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.876 ms 00:28:31.955 [2024-12-10 21:59:39.587372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.955 [2024-12-10 21:59:39.622069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.955 [2024-12-10 21:59:39.622109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:31.955 [2024-12-10 21:59:39.622128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.696 ms 00:28:31.955 [2024-12-10 21:59:39.622140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.955 [2024-12-10 21:59:39.622194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.955 [2024-12-10 21:59:39.622207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:31.955 [2024-12-10 21:59:39.622226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:31.955 [2024-12-10 21:59:39.622240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.955 [2024-12-10 21:59:39.622360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.955 [2024-12-10 21:59:39.622379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:31.955 [2024-12-10 21:59:39.622394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:31.955 [2024-12-10 21:59:39.622406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.955 [2024-12-10 21:59:39.623762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4542.450 ms, result 0 00:28:31.955 { 00:28:31.955 "name": "ftl0", 00:28:31.955 "uuid": "b02637a8-7852-4645-9f76-e532285a8360" 00:28:31.955 } 00:28:31.955 21:59:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:31.955 21:59:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:32.214 21:59:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:32.214 21:59:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:32.214 21:59:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:32.473 /dev/nbd0 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:32.473 1+0 records in 00:28:32.473 1+0 records out 00:28:32.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00276885 s, 1.5 MB/s 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:32.473 21:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:32.732 [2024-12-10 21:59:40.204199] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:28:32.732 [2024-12-10 21:59:40.204519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82880 ] 00:28:32.732 [2024-12-10 21:59:40.396877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.991 [2024-12-10 21:59:40.513228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.370  [2024-12-10T21:59:43.039Z] Copying: 210/1024 [MB] (210 MBps) [2024-12-10T21:59:43.978Z] Copying: 421/1024 [MB] (210 MBps) [2024-12-10T21:59:44.914Z] Copying: 635/1024 [MB] (213 MBps) [2024-12-10T21:59:45.851Z] Copying: 843/1024 [MB] (207 MBps) [2024-12-10T21:59:47.231Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:28:39.500 00:28:39.500 21:59:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:41.406 21:59:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:41.406 [2024-12-10 21:59:48.721564] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
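[annotation] The repeated bdev_get_bdevs/jq exchanges above are get_bdev_size: the helper multiplies block_size by num_blocks and reports MiB, which is how 1310720 4-KiB blocks become the 5120 MiB base_size and 26476544 blocks become the 103424 MiB lvol size. A minimal sketch of that computation (rpc.py path as in the log; the function body itself is illustrative):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {                      # prints the named bdev's size in MiB
        local bdev_name=$1 bs nb
        bs=$("$rpc_py" bdev_get_bdevs -b "$bdev_name" | jq '.[] .block_size')
        nb=$("$rpc_py" bdev_get_bdevs -b "$bdev_name" | jq '.[] .num_blocks')
        echo $(( bs * nb / 1024 / 1024 ))  # e.g. 4096 * 1310720 -> 5120 MiB
    }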
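[annotation] Stripped of xtrace noise, the whole bring-up traced above is a short RPC sequence: attach the base NVMe, build a thin-provisioned lvol on it, attach the cache NVMe and split off a 5171 MiB partition, create the FTL bdev with a 10 MiB L2P DRAM limit, then export it over NBD and stage 1 GiB of random data. Replayed as a sketch, with all names, sizes and flags copied from the trace; capturing $lvs and $lvol from rpc.py's stdout is an assumption about how the helpers collect the UUIDs (the log shows lvs=701344bb-... and the lvol 1d5cb9f0-... being captured this way):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base dev -> nvme0n1
    lvs=$("$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs)                   # prints lvstore UUID
    lvol=$("$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")        # thin, 103424 MiB
    "$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache dev -> nvc0n1
    "$rpc_py" bdev_split_create nvc0n1 -s 5171 1                            # -> nvc0n1p0
    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0
    modprobe nbd
    "$rpc_py" nbd_start_disk ftl0 /dev/nbd0
    "$spdk_dd" -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144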
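[annotation] Two numbers in the trace are worth checking against each other: the layout dump reports 20971520 L2P entries at 4 bytes each, i.e. an 80 MiB mapping table, while "--l2p_dram_limit 10" caps the resident portion at 10 MiB (hence "l2p maximum resident size is: 9 (of 10) MiB" logged above), so most of the L2P must be paged in and out during I/O. Verified with shell arithmetic:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80   -> full L2P table, matches "Region l2p ... 80.00 MiB"
    echo $(( 262144 * 4096 / 1024 / 1024 ))  # 1024 -> the dd workload in MiB, the "1024/1024 [MB]" above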
00:28:41.406 [2024-12-10 21:59:48.721681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82972 ] 00:28:41.406 [2024-12-10 21:59:48.904195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.406 [2024-12-10 21:59:49.027661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.784  [2024-12-10T21:59:51.452Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-10T21:59:52.831Z] Copying: 30/1024 [MB] (15 MBps) [2024-12-10T21:59:53.400Z] Copying: 44/1024 [MB] (14 MBps) [2024-12-10T21:59:54.779Z] Copying: 59/1024 [MB] (14 MBps) [2024-12-10T21:59:55.715Z] Copying: 74/1024 [MB] (15 MBps) [2024-12-10T21:59:56.652Z] Copying: 89/1024 [MB] (15 MBps) [2024-12-10T21:59:57.589Z] Copying: 105/1024 [MB] (15 MBps) [2024-12-10T21:59:58.525Z] Copying: 120/1024 [MB] (15 MBps) [2024-12-10T21:59:59.463Z] Copying: 136/1024 [MB] (15 MBps) [2024-12-10T22:00:00.400Z] Copying: 151/1024 [MB] (15 MBps) [2024-12-10T22:00:01.780Z] Copying: 167/1024 [MB] (15 MBps) [2024-12-10T22:00:02.753Z] Copying: 182/1024 [MB] (14 MBps) [2024-12-10T22:00:03.708Z] Copying: 196/1024 [MB] (14 MBps) [2024-12-10T22:00:04.646Z] Copying: 212/1024 [MB] (15 MBps) [2024-12-10T22:00:05.585Z] Copying: 228/1024 [MB] (15 MBps) [2024-12-10T22:00:06.522Z] Copying: 244/1024 [MB] (15 MBps) [2024-12-10T22:00:07.459Z] Copying: 259/1024 [MB] (15 MBps) [2024-12-10T22:00:08.395Z] Copying: 275/1024 [MB] (15 MBps) [2024-12-10T22:00:09.772Z] Copying: 290/1024 [MB] (15 MBps) [2024-12-10T22:00:10.709Z] Copying: 305/1024 [MB] (15 MBps) [2024-12-10T22:00:11.644Z] Copying: 320/1024 [MB] (14 MBps) [2024-12-10T22:00:12.579Z] Copying: 335/1024 [MB] (15 MBps) [2024-12-10T22:00:13.516Z] Copying: 350/1024 [MB] (14 MBps) [2024-12-10T22:00:14.457Z] Copying: 365/1024 [MB] (14 MBps) [2024-12-10T22:00:15.392Z] Copying: 379/1024 [MB] (14 MBps) [2024-12-10T22:00:16.769Z] Copying: 394/1024 [MB] (14 MBps) [2024-12-10T22:00:17.706Z] Copying: 409/1024 [MB] (14 MBps) [2024-12-10T22:00:18.643Z] Copying: 423/1024 [MB] (14 MBps) [2024-12-10T22:00:19.579Z] Copying: 438/1024 [MB] (14 MBps) [2024-12-10T22:00:20.516Z] Copying: 453/1024 [MB] (15 MBps) [2024-12-10T22:00:21.452Z] Copying: 468/1024 [MB] (15 MBps) [2024-12-10T22:00:22.389Z] Copying: 483/1024 [MB] (14 MBps) [2024-12-10T22:00:23.767Z] Copying: 497/1024 [MB] (14 MBps) [2024-12-10T22:00:24.704Z] Copying: 512/1024 [MB] (14 MBps) [2024-12-10T22:00:25.685Z] Copying: 526/1024 [MB] (14 MBps) [2024-12-10T22:00:26.633Z] Copying: 541/1024 [MB] (14 MBps) [2024-12-10T22:00:27.569Z] Copying: 555/1024 [MB] (14 MBps) [2024-12-10T22:00:28.505Z] Copying: 569/1024 [MB] (14 MBps) [2024-12-10T22:00:29.443Z] Copying: 584/1024 [MB] (14 MBps) [2024-12-10T22:00:30.378Z] Copying: 599/1024 [MB] (14 MBps) [2024-12-10T22:00:31.753Z] Copying: 614/1024 [MB] (14 MBps) [2024-12-10T22:00:32.689Z] Copying: 629/1024 [MB] (15 MBps) [2024-12-10T22:00:33.626Z] Copying: 643/1024 [MB] (14 MBps) [2024-12-10T22:00:34.561Z] Copying: 658/1024 [MB] (14 MBps) [2024-12-10T22:00:35.496Z] Copying: 674/1024 [MB] (15 MBps) [2024-12-10T22:00:36.431Z] Copying: 689/1024 [MB] (15 MBps) [2024-12-10T22:00:37.367Z] Copying: 703/1024 [MB] (14 MBps) [2024-12-10T22:00:38.744Z] Copying: 718/1024 [MB] (14 MBps) [2024-12-10T22:00:39.678Z] Copying: 733/1024 [MB] (14 MBps) [2024-12-10T22:00:40.614Z] Copying: 747/1024 [MB] (14 MBps) 
[2024-12-10T22:00:41.552Z] Copying: 762/1024 [MB] (14 MBps) [2024-12-10T22:00:42.490Z] Copying: 778/1024 [MB] (15 MBps) [2024-12-10T22:00:43.432Z] Copying: 793/1024 [MB] (15 MBps) [2024-12-10T22:00:44.370Z] Copying: 809/1024 [MB] (15 MBps) [2024-12-10T22:00:45.744Z] Copying: 825/1024 [MB] (15 MBps) [2024-12-10T22:00:46.312Z] Copying: 841/1024 [MB] (15 MBps) [2024-12-10T22:00:47.688Z] Copying: 856/1024 [MB] (15 MBps) [2024-12-10T22:00:48.659Z] Copying: 871/1024 [MB] (15 MBps) [2024-12-10T22:00:49.596Z] Copying: 886/1024 [MB] (15 MBps) [2024-12-10T22:00:50.533Z] Copying: 901/1024 [MB] (14 MBps) [2024-12-10T22:00:51.470Z] Copying: 916/1024 [MB] (14 MBps) [2024-12-10T22:00:52.406Z] Copying: 930/1024 [MB] (14 MBps) [2024-12-10T22:00:53.343Z] Copying: 945/1024 [MB] (14 MBps) [2024-12-10T22:00:54.722Z] Copying: 960/1024 [MB] (15 MBps) [2024-12-10T22:00:55.657Z] Copying: 976/1024 [MB] (15 MBps) [2024-12-10T22:00:56.594Z] Copying: 991/1024 [MB] (15 MBps) [2024-12-10T22:00:57.532Z] Copying: 1006/1024 [MB] (15 MBps) [2024-12-10T22:00:57.532Z] Copying: 1021/1024 [MB] (14 MBps) [2024-12-10T22:00:58.911Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:29:51.180 00:29:51.180 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:29:51.180 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:29:51.180 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:51.441 [2024-12-10 22:00:59.007636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.007691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:51.441 [2024-12-10 22:00:59.007707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:51.441 [2024-12-10 22:00:59.007720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.441 [2024-12-10 22:00:59.007748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:51.441 [2024-12-10 22:00:59.011712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.011749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:51.441 [2024-12-10 22:00:59.011764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.945 ms 00:29:51.441 [2024-12-10 22:00:59.011776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.441 [2024-12-10 22:00:59.014769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.014812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:51.441 [2024-12-10 22:00:59.014829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:29:51.441 [2024-12-10 22:00:59.014840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.441 [2024-12-10 22:00:59.032859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.032908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:51.441 [2024-12-10 22:00:59.032927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.017 ms 00:29:51.441 [2024-12-10 22:00:59.032937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.441 [2024-12-10 22:00:59.037662] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.037699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:51.441 [2024-12-10 22:00:59.037714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.686 ms 00:29:51.441 [2024-12-10 22:00:59.037724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.441 [2024-12-10 22:00:59.073740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.073778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:51.441 [2024-12-10 22:00:59.073796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.991 ms 00:29:51.441 [2024-12-10 22:00:59.073806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.441 [2024-12-10 22:00:59.099473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.099513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:51.441 [2024-12-10 22:00:59.099534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.661 ms 00:29:51.441 [2024-12-10 22:00:59.099544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.441 [2024-12-10 22:00:59.099701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.099716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:51.441 [2024-12-10 22:00:59.099729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:29:51.441 [2024-12-10 22:00:59.099739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.441 [2024-12-10 22:00:59.135597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.441 [2024-12-10 22:00:59.135810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:51.441 [2024-12-10 22:00:59.135839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.893 ms 00:29:51.441 [2024-12-10 22:00:59.135850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.705 [2024-12-10 22:00:59.170409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.705 [2024-12-10 22:00:59.170458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:51.705 [2024-12-10 22:00:59.170498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.568 ms 00:29:51.705 [2024-12-10 22:00:59.170516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.705 [2024-12-10 22:00:59.204534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.705 [2024-12-10 22:00:59.204571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:51.705 [2024-12-10 22:00:59.204586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.009 ms 00:29:51.705 [2024-12-10 22:00:59.204597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.705 [2024-12-10 22:00:59.238206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.705 [2024-12-10 22:00:59.238251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:51.705 [2024-12-10 22:00:59.238270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.569 ms 00:29:51.705 [2024-12-10 22:00:59.238279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:51.705 [2024-12-10 22:00:59.238321] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:51.705 [2024-12-10 22:00:59.238338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:29:51.705 [2024-12-10 22:00:59.238675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.238998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:51.705 [2024-12-10 22:00:59.239272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239636] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:51.706 [2024-12-10 22:00:59.239666] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:51.706 [2024-12-10 22:00:59.239678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b02637a8-7852-4645-9f76-e532285a8360 00:29:51.706 [2024-12-10 22:00:59.239689] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:51.706 [2024-12-10 22:00:59.239704] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:51.706 [2024-12-10 22:00:59.239716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:51.706 [2024-12-10 22:00:59.239730] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:51.706 [2024-12-10 22:00:59.239739] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:51.706 [2024-12-10 22:00:59.239752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:51.706 [2024-12-10 22:00:59.239762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:51.706 [2024-12-10 22:00:59.239774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:51.706 [2024-12-10 22:00:59.239783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:51.706 [2024-12-10 22:00:59.239795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.706 [2024-12-10 22:00:59.239805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:51.706 [2024-12-10 22:00:59.239818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:29:51.706 [2024-12-10 22:00:59.239827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.706 [2024-12-10 22:00:59.259488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.706 [2024-12-10 22:00:59.259524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:51.706 [2024-12-10 22:00:59.259540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.638 ms 00:29:51.706 [2024-12-10 22:00:59.259550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.706 [2024-12-10 22:00:59.260113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.706 [2024-12-10 22:00:59.260126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:51.706 [2024-12-10 22:00:59.260139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:29:51.706 [2024-12-10 22:00:59.260150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.706 [2024-12-10 22:00:59.325226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.706 [2024-12-10 22:00:59.325264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.706 [2024-12-10 22:00:59.325280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.706 [2024-12-10 22:00:59.325291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.706 [2024-12-10 22:00:59.325359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.706 [2024-12-10 22:00:59.325370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.706 
[2024-12-10 22:00:59.325384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.706 [2024-12-10 22:00:59.325393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.706 [2024-12-10 22:00:59.325476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.706 [2024-12-10 22:00:59.325494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.706 [2024-12-10 22:00:59.325507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.706 [2024-12-10 22:00:59.325518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.706 [2024-12-10 22:00:59.325543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.706 [2024-12-10 22:00:59.325553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.706 [2024-12-10 22:00:59.325566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.706 [2024-12-10 22:00:59.325576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.449966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.966 [2024-12-10 22:00:59.450039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.966 [2024-12-10 22:00:59.450073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.966 [2024-12-10 22:00:59.450084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.546639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.966 [2024-12-10 22:00:59.546693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.966 [2024-12-10 22:00:59.546712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.966 [2024-12-10 22:00:59.546723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.546860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.966 [2024-12-10 22:00:59.546873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:51.966 [2024-12-10 22:00:59.546891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.966 [2024-12-10 22:00:59.546902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.546960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.966 [2024-12-10 22:00:59.546973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:51.966 [2024-12-10 22:00:59.546987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.966 [2024-12-10 22:00:59.546998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.547144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.966 [2024-12-10 22:00:59.547160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:51.966 [2024-12-10 22:00:59.547173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.966 [2024-12-10 22:00:59.547186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.547233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.966 [2024-12-10 22:00:59.547247] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:51.966 [2024-12-10 22:00:59.547260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.966 [2024-12-10 22:00:59.547270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.547319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.966 [2024-12-10 22:00:59.547332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:51.966 [2024-12-10 22:00:59.547344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.966 [2024-12-10 22:00:59.547358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.547415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.966 [2024-12-10 22:00:59.547427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:51.966 [2024-12-10 22:00:59.547441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.966 [2024-12-10 22:00:59.547451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.966 [2024-12-10 22:00:59.547614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.791 ms, result 0 00:29:51.966 true 00:29:51.966 22:00:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82732 00:29:51.966 22:00:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82732 00:29:51.966 22:00:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:29:51.966 [2024-12-10 22:00:59.666652] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:29:51.966 [2024-12-10 22:00:59.666956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83686 ] 00:29:52.225 [2024-12-10 22:00:59.852121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.484 [2024-12-10 22:00:59.975509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.863  [2024-12-10T22:01:02.532Z] Copying: 209/1024 [MB] (209 MBps) [2024-12-10T22:01:03.471Z] Copying: 426/1024 [MB] (216 MBps) [2024-12-10T22:01:04.408Z] Copying: 642/1024 [MB] (216 MBps) [2024-12-10T22:01:05.345Z] Copying: 853/1024 [MB] (210 MBps) [2024-12-10T22:01:06.281Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:29:58.550 00:29:58.809 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82732 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:58.809 22:01:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:58.809 [2024-12-10 22:01:06.405880] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
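Steps @83-@84 above are the point of the whole test: the target that owns ftl0 is killed with SIGKILL mid-life, so FTL never writes its clean-shutdown state, and the dead process's shared-memory trace file is cleared away. Step @88 then reopens ftl0 inside a standalone spdk_dd using the JSON config saved earlier and writes the second 262144-block extent (--seek=262144 positions it past the region written before the kill). Reduced to its three moves, with the PID and paths exactly as in this run:

kill -9 82732                               # SIGKILL: the "dirty" in dirty_shutdown; no clean state persisted
rm -f /dev/shm/spdk_tgt_trace.pid82732      # drop the dead target's shm trace file
# spdk_dd is a standalone SPDK app: --json rebuilds the bdev stack
# (including ftl0) in-process, so no running spdk_tgt is required
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
    --count=262144 --seek=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The "line 87: 82732 Killed" notice interleaved above is simply bash reporting the SIGKILL'd job when the next script line runs.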
00:29:58.809 [2024-12-10 22:01:06.406006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83761 ] 00:29:59.069 [2024-12-10 22:01:06.585208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.069 [2024-12-10 22:01:06.704500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.637 [2024-12-10 22:01:07.084269] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.637 [2024-12-10 22:01:07.084561] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.637 [2024-12-10 22:01:07.150685] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:59.637 [2024-12-10 22:01:07.151010] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:59.637 [2024-12-10 22:01:07.151254] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:59.898 [2024-12-10 22:01:07.481550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.481601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:59.898 [2024-12-10 22:01:07.481617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:59.898 [2024-12-10 22:01:07.481632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.481684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.481696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:59.898 [2024-12-10 22:01:07.481706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:59.898 [2024-12-10 22:01:07.481716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.481736] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:59.898 [2024-12-10 22:01:07.482710] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:59.898 [2024-12-10 22:01:07.482740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.482752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:59.898 [2024-12-10 22:01:07.482763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:29:59.898 [2024-12-10 22:01:07.482773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.484569] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:59.898 [2024-12-10 22:01:07.503476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.503516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:59.898 [2024-12-10 22:01:07.503529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.938 ms 00:29:59.898 [2024-12-10 22:01:07.503539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.503621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.503634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:29:59.898 [2024-12-10 22:01:07.503646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:59.898 [2024-12-10 22:01:07.503656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.511626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.511656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:59.898 [2024-12-10 22:01:07.511684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.908 ms 00:29:59.898 [2024-12-10 22:01:07.511695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.511780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.511793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:59.898 [2024-12-10 22:01:07.511804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:59.898 [2024-12-10 22:01:07.511814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.511858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.511870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:59.898 [2024-12-10 22:01:07.511880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:59.898 [2024-12-10 22:01:07.511891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.511916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:59.898 [2024-12-10 22:01:07.516618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.516651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:59.898 [2024-12-10 22:01:07.516664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:29:59.898 [2024-12-10 22:01:07.516675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.516726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.516739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:59.898 [2024-12-10 22:01:07.516750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:59.898 [2024-12-10 22:01:07.516760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.516820] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:59.898 [2024-12-10 22:01:07.516847] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:59.898 [2024-12-10 22:01:07.516885] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:59.898 [2024-12-10 22:01:07.516903] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:59.898 [2024-12-10 22:01:07.516995] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:59.898 [2024-12-10 22:01:07.517009] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:59.898 
[2024-12-10 22:01:07.517022] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:59.898 [2024-12-10 22:01:07.517039] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:59.898 [2024-12-10 22:01:07.517052] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:59.898 [2024-12-10 22:01:07.517077] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:59.898 [2024-12-10 22:01:07.517089] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:59.898 [2024-12-10 22:01:07.517099] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:59.898 [2024-12-10 22:01:07.517110] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:59.898 [2024-12-10 22:01:07.517121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.517131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:59.898 [2024-12-10 22:01:07.517142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:29:59.898 [2024-12-10 22:01:07.517152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.517229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.898 [2024-12-10 22:01:07.517245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:59.898 [2024-12-10 22:01:07.517257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:59.898 [2024-12-10 22:01:07.517267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.898 [2024-12-10 22:01:07.517353] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:59.898 [2024-12-10 22:01:07.517366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:59.898 [2024-12-10 22:01:07.517377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.898 [2024-12-10 22:01:07.517387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.898 [2024-12-10 22:01:07.517398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:59.898 [2024-12-10 22:01:07.517407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:59.898 [2024-12-10 22:01:07.517418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:59.898 [2024-12-10 22:01:07.517427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:59.898 [2024-12-10 22:01:07.517437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:59.898 [2024-12-10 22:01:07.517458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.898 [2024-12-10 22:01:07.517470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:59.898 [2024-12-10 22:01:07.517479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:59.898 [2024-12-10 22:01:07.517489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.898 [2024-12-10 22:01:07.517499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:59.898 [2024-12-10 22:01:07.517509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:59.898 [2024-12-10 22:01:07.517518] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:59.899 [2024-12-10 22:01:07.517538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:59.899 [2024-12-10 22:01:07.517547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:59.899 [2024-12-10 22:01:07.517566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.899 [2024-12-10 22:01:07.517584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:59.899 [2024-12-10 22:01:07.517593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.899 [2024-12-10 22:01:07.517612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:59.899 [2024-12-10 22:01:07.517621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.899 [2024-12-10 22:01:07.517640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:59.899 [2024-12-10 22:01:07.517650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.899 [2024-12-10 22:01:07.517668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:59.899 [2024-12-10 22:01:07.517677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.899 [2024-12-10 22:01:07.517696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:59.899 [2024-12-10 22:01:07.517705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:59.899 [2024-12-10 22:01:07.517714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.899 [2024-12-10 22:01:07.517724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:59.899 [2024-12-10 22:01:07.517734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:59.899 [2024-12-10 22:01:07.517743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:59.899 [2024-12-10 22:01:07.517761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:59.899 [2024-12-10 22:01:07.517772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.899 [2024-12-10 22:01:07.517780] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:59.899 [2024-12-10 22:01:07.517791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:59.899 [2024-12-10 22:01:07.517805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.899 [2024-12-10 22:01:07.517815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.899 [2024-12-10 
22:01:07.517825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:59.899 [2024-12-10 22:01:07.517836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:59.899 [2024-12-10 22:01:07.517845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:59.899 [2024-12-10 22:01:07.517855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:59.899 [2024-12-10 22:01:07.517864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:59.899 [2024-12-10 22:01:07.517873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:59.899 [2024-12-10 22:01:07.517884] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:59.899 [2024-12-10 22:01:07.517897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.899 [2024-12-10 22:01:07.517908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:59.899 [2024-12-10 22:01:07.517920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:59.899 [2024-12-10 22:01:07.517931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:59.899 [2024-12-10 22:01:07.517941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:59.899 [2024-12-10 22:01:07.517951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:59.899 [2024-12-10 22:01:07.517962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:59.899 [2024-12-10 22:01:07.517972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:59.899 [2024-12-10 22:01:07.517983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:59.899 [2024-12-10 22:01:07.517993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:59.899 [2024-12-10 22:01:07.518003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:59.899 [2024-12-10 22:01:07.518013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:59.899 [2024-12-10 22:01:07.518023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:59.899 [2024-12-10 22:01:07.518034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:59.899 [2024-12-10 22:01:07.518044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:59.899 [2024-12-10 22:01:07.518066] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:29:59.899 [2024-12-10 22:01:07.518078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.899 [2024-12-10 22:01:07.518091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:59.899 [2024-12-10 22:01:07.518102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:59.899 [2024-12-10 22:01:07.518113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:59.899 [2024-12-10 22:01:07.518124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:59.899 [2024-12-10 22:01:07.518135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.899 [2024-12-10 22:01:07.518146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:59.899 [2024-12-10 22:01:07.518156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:29:59.899 [2024-12-10 22:01:07.518166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.899 [2024-12-10 22:01:07.559608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.899 [2024-12-10 22:01:07.559649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:59.899 [2024-12-10 22:01:07.559665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.455 ms 00:29:59.899 [2024-12-10 22:01:07.559684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.899 [2024-12-10 22:01:07.559772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.899 [2024-12-10 22:01:07.559784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:59.899 [2024-12-10 22:01:07.559796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:59.899 [2024-12-10 22:01:07.559806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.899 [2024-12-10 22:01:07.617863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.899 [2024-12-10 22:01:07.617911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:59.899 [2024-12-10 22:01:07.617931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.091 ms 00:29:59.899 [2024-12-10 22:01:07.617958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.899 [2024-12-10 22:01:07.618006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.899 [2024-12-10 22:01:07.618018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.899 [2024-12-10 22:01:07.618029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:59.899 [2024-12-10 22:01:07.618040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.899 [2024-12-10 22:01:07.618862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.899 [2024-12-10 22:01:07.618886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.899 [2024-12-10 22:01:07.618898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:29:59.899 [2024-12-10 22:01:07.618912] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.899 [2024-12-10 22:01:07.619039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.899 [2024-12-10 22:01:07.619069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.899 [2024-12-10 22:01:07.619080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:59.899 [2024-12-10 22:01:07.619091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.159 [2024-12-10 22:01:07.639372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.159 [2024-12-10 22:01:07.639413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:00.159 [2024-12-10 22:01:07.639428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.293 ms 00:30:00.159 [2024-12-10 22:01:07.639439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.159 [2024-12-10 22:01:07.658488] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:00.159 [2024-12-10 22:01:07.658530] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:00.159 [2024-12-10 22:01:07.658547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.159 [2024-12-10 22:01:07.658575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:00.159 [2024-12-10 22:01:07.658588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.001 ms 00:30:00.159 [2024-12-10 22:01:07.658599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.159 [2024-12-10 22:01:07.687306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.159 [2024-12-10 22:01:07.687344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:00.159 [2024-12-10 22:01:07.687374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.708 ms 00:30:00.159 [2024-12-10 22:01:07.687386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.159 [2024-12-10 22:01:07.705900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.159 [2024-12-10 22:01:07.705942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:00.159 [2024-12-10 22:01:07.705956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.498 ms 00:30:00.159 [2024-12-10 22:01:07.705966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.159 [2024-12-10 22:01:07.723644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.159 [2024-12-10 22:01:07.723683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:00.159 [2024-12-10 22:01:07.723695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.645 ms 00:30:00.159 [2024-12-10 22:01:07.723722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.159 [2024-12-10 22:01:07.724544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.159 [2024-12-10 22:01:07.724576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:00.159 [2024-12-10 22:01:07.724589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:30:00.159 [2024-12-10 22:01:07.724600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
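With no clean-state marker on disk, this startup takes FTL's dirty-shutdown recovery path instead of the fast clean load: the write-buffer blobstore was recovered (bs_recover above), NV cache chunk state was reloaded (full chunks = 2, empty chunks = 2), and the band, valid-map, and trim metadata were restored from their persisted regions. The P2L (physical-to-logical) checkpoints restored next are what make this possible: broadly, they record where open-band data physically landed, so the in-memory L2P can be rebuilt without the snapshot a clean shutdown would have left. The sizes are consistent with the layout dumped earlier: 20971520 L2P entries at 4 bytes each is 83,886,080 bytes, exactly the 80.00 MiB l2p region. Once a target process owns the device again, a quick sanity check over RPC might look like this (bdev_get_bdevs is the standard SPDK RPC for listing bdevs; the jq filter is illustrative):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 \
    | jq '.[0] | {block_size, num_blocks}'   # geometry of the recovered bdev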
00:30:00.159 [2024-12-10 22:01:07.823470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.159 [2024-12-10 22:01:07.823545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:00.159 [2024-12-10 22:01:07.823579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.006 ms 00:30:00.159 [2024-12-10 22:01:07.823591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.159 [2024-12-10 22:01:07.834070] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:00.160 [2024-12-10 22:01:07.836637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.160 [2024-12-10 22:01:07.836666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:00.160 [2024-12-10 22:01:07.836695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.005 ms 00:30:00.160 [2024-12-10 22:01:07.836712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.160 [2024-12-10 22:01:07.836828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.160 [2024-12-10 22:01:07.836842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:00.160 [2024-12-10 22:01:07.836853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:00.160 [2024-12-10 22:01:07.836863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.160 [2024-12-10 22:01:07.836944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.160 [2024-12-10 22:01:07.836956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:00.160 [2024-12-10 22:01:07.836967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:00.160 [2024-12-10 22:01:07.836978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.160 [2024-12-10 22:01:07.837022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.160 [2024-12-10 22:01:07.837034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:00.160 [2024-12-10 22:01:07.837044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:00.160 [2024-12-10 22:01:07.837055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.160 [2024-12-10 22:01:07.837113] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:00.160 [2024-12-10 22:01:07.837128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.160 [2024-12-10 22:01:07.837139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:00.160 [2024-12-10 22:01:07.837150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:00.160 [2024-12-10 22:01:07.837165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.160 [2024-12-10 22:01:07.873597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.160 [2024-12-10 22:01:07.873640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:00.160 [2024-12-10 22:01:07.873656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.469 ms 00:30:00.160 [2024-12-10 22:01:07.873668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.160 [2024-12-10 22:01:07.873751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:00.160 [2024-12-10 
22:01:07.873764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:00.160 [2024-12-10 22:01:07.873776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:00.160 [2024-12-10 22:01:07.873787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:00.160 [2024-12-10 22:01:07.875026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.632 ms, result 0 00:30:01.538  [2024-12-10T22:01:55.224Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-12-10 22:01:54.982029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.493 [2024-12-10 22:01:54.982126] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:47.493 [2024-12-10 22:01:54.982146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:47.493 [2024-12-10 22:01:54.982158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.493 [2024-12-10 22:01:54.983893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:47.493 [2024-12-10 22:01:54.990401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.493 [2024-12-10 22:01:54.990435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:47.493 [2024-12-10 22:01:54.990456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.474 ms 00:30:47.493 [2024-12-10 22:01:54.990474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.493 [2024-12-10 22:01:55.001110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.493 [2024-12-10 22:01:55.001155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:47.493 [2024-12-10 22:01:55.001185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.784 ms 00:30:47.493 [2024-12-10 22:01:55.001196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.493 [2024-12-10 22:01:55.023561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.493 [2024-12-10 22:01:55.023609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:47.493 [2024-12-10 22:01:55.023624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.382 ms 00:30:47.493 [2024-12-10 22:01:55.023635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.494 [2024-12-10 22:01:55.028573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.494 [2024-12-10 22:01:55.028605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:47.494 [2024-12-10 22:01:55.028618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.898 ms 00:30:47.494 [2024-12-10 22:01:55.028627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.494 [2024-12-10 22:01:55.063544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.494 [2024-12-10 22:01:55.063580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:47.494 [2024-12-10 22:01:55.063609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.923 ms 00:30:47.494 [2024-12-10 22:01:55.063620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.494 [2024-12-10 22:01:55.083484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.494 [2024-12-10 22:01:55.083522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:47.494 [2024-12-10 22:01:55.083551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.859 ms 00:30:47.494 [2024-12-10 22:01:55.083562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.494 [2024-12-10 22:01:55.183899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.494 [2024-12-10 22:01:55.183956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:47.494 [2024-12-10 22:01:55.183977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.459 ms 00:30:47.494 [2024-12-10 22:01:55.183988] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.494 [2024-12-10 22:01:55.218732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.494 [2024-12-10 22:01:55.218771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:47.494 [2024-12-10 22:01:55.218783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.782 ms 00:30:47.494 [2024-12-10 22:01:55.218823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.755 [2024-12-10 22:01:55.253144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.755 [2024-12-10 22:01:55.253182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:47.755 [2024-12-10 22:01:55.253210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.340 ms 00:30:47.755 [2024-12-10 22:01:55.253220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.755 [2024-12-10 22:01:55.286209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.755 [2024-12-10 22:01:55.286244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:47.755 [2024-12-10 22:01:55.286271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.005 ms 00:30:47.755 [2024-12-10 22:01:55.286281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.755 [2024-12-10 22:01:55.319239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.755 [2024-12-10 22:01:55.319273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:47.755 [2024-12-10 22:01:55.319285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.936 ms 00:30:47.755 [2024-12-10 22:01:55.319294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.755 [2024-12-10 22:01:55.319346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:47.755 [2024-12-10 22:01:55.319362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 84224 / 261120 wr_cnt: 1 state: open 00:30:47.755 [2024-12-10 22:01:55.319375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:30:47.755 [2024-12-10 22:01:55.319486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.319995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.320006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.320016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.320027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.320040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.320051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.320072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:47.755 [2024-12-10 22:01:55.320084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320295] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:47.756 [2024-12-10 22:01:55.320466] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:47.756 [2024-12-10 22:01:55.320476] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b02637a8-7852-4645-9f76-e532285a8360 00:30:47.756 [2024-12-10 22:01:55.320503] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 84224 00:30:47.756 [2024-12-10 22:01:55.320513] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 85184 00:30:47.756 [2024-12-10 22:01:55.320523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 84224 00:30:47.756 [2024-12-10 22:01:55.320534] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114 00:30:47.756 [2024-12-10 22:01:55.320544] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:47.756 [2024-12-10 22:01:55.320554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:47.756 [2024-12-10 22:01:55.320564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:47.756 [2024-12-10 22:01:55.320573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:47.756 [2024-12-10 22:01:55.320583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:47.756 [2024-12-10 22:01:55.320593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.756 [2024-12-10 22:01:55.320604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump 
statistics 00:30:47.756 [2024-12-10 22:01:55.320614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:30:47.756 [2024-12-10 22:01:55.320624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.756 [2024-12-10 22:01:55.340129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.756 [2024-12-10 22:01:55.340165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:47.756 [2024-12-10 22:01:55.340193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.504 ms 00:30:47.756 [2024-12-10 22:01:55.340204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.756 [2024-12-10 22:01:55.340762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.756 [2024-12-10 22:01:55.340780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:47.756 [2024-12-10 22:01:55.340791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:30:47.756 [2024-12-10 22:01:55.340808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.756 [2024-12-10 22:01:55.392530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.756 [2024-12-10 22:01:55.392569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:47.756 [2024-12-10 22:01:55.392583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.756 [2024-12-10 22:01:55.392594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.756 [2024-12-10 22:01:55.392671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.756 [2024-12-10 22:01:55.392683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:47.756 [2024-12-10 22:01:55.392694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.756 [2024-12-10 22:01:55.392709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.756 [2024-12-10 22:01:55.392803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.756 [2024-12-10 22:01:55.392818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:47.756 [2024-12-10 22:01:55.392829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.756 [2024-12-10 22:01:55.392840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.756 [2024-12-10 22:01:55.392857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.756 [2024-12-10 22:01:55.392869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:47.756 [2024-12-10 22:01:55.392879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.756 [2024-12-10 22:01:55.392889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.514004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.016 [2024-12-10 22:01:55.514090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:48.016 [2024-12-10 22:01:55.514106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.016 [2024-12-10 22:01:55.514118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.611598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.016 [2024-12-10 22:01:55.611658] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:48.016 [2024-12-10 22:01:55.611673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.016 [2024-12-10 22:01:55.611707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.611815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.016 [2024-12-10 22:01:55.611828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:48.016 [2024-12-10 22:01:55.611839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.016 [2024-12-10 22:01:55.611849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.611890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.016 [2024-12-10 22:01:55.611901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:48.016 [2024-12-10 22:01:55.611912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.016 [2024-12-10 22:01:55.611922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.612041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.016 [2024-12-10 22:01:55.612054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:48.016 [2024-12-10 22:01:55.612081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.016 [2024-12-10 22:01:55.612093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.612136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.016 [2024-12-10 22:01:55.612149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:48.016 [2024-12-10 22:01:55.612160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.016 [2024-12-10 22:01:55.612169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.612216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.016 [2024-12-10 22:01:55.612228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:48.016 [2024-12-10 22:01:55.612238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.016 [2024-12-10 22:01:55.612248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.612293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.016 [2024-12-10 22:01:55.612305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:48.016 [2024-12-10 22:01:55.612315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.016 [2024-12-10 22:01:55.612325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.016 [2024-12-10 22:01:55.612456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 632.905 ms, result 0 00:30:49.960 00:30:49.960 00:30:49.960 22:01:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:51.339 22:01:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:51.339 [2024-12-10 22:01:59.057321] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:30:51.339 [2024-12-10 22:01:59.057462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84276 ] 00:30:51.599 [2024-12-10 22:01:59.243209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.858 [2024-12-10 22:01:59.360147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.117 [2024-12-10 22:01:59.728962] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:52.117 [2024-12-10 22:01:59.729037] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:52.378 [2024-12-10 22:01:59.892474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.892532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:52.378 [2024-12-10 22:01:59.892548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:52.378 [2024-12-10 22:01:59.892558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.892623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.892639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:52.378 [2024-12-10 22:01:59.892650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:52.378 [2024-12-10 22:01:59.892660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.892682] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:52.378 [2024-12-10 22:01:59.893665] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:52.378 [2024-12-10 22:01:59.893697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.893709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:52.378 [2024-12-10 22:01:59.893721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:30:52.378 [2024-12-10 22:01:59.893731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.895519] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:52.378 [2024-12-10 22:01:59.914305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.914343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:52.378 [2024-12-10 22:01:59.914357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.818 ms 00:30:52.378 [2024-12-10 22:01:59.914367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.914458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.914471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:52.378 [2024-12-10 22:01:59.914482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:52.378 
[2024-12-10 22:01:59.914492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.922894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.922922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:52.378 [2024-12-10 22:01:59.922935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.342 ms 00:30:52.378 [2024-12-10 22:01:59.922949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.923049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.923071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:52.378 [2024-12-10 22:01:59.923084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:52.378 [2024-12-10 22:01:59.923094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.923136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.923147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:52.378 [2024-12-10 22:01:59.923158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:52.378 [2024-12-10 22:01:59.923168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.923197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:52.378 [2024-12-10 22:01:59.927943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.927973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:52.378 [2024-12-10 22:01:59.928005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:30:52.378 [2024-12-10 22:01:59.928014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.928049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.378 [2024-12-10 22:01:59.928070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:52.378 [2024-12-10 22:01:59.928082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:52.378 [2024-12-10 22:01:59.928092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.378 [2024-12-10 22:01:59.928142] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:52.378 [2024-12-10 22:01:59.928173] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:52.378 [2024-12-10 22:01:59.928207] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:52.378 [2024-12-10 22:01:59.928229] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:52.378 [2024-12-10 22:01:59.928317] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:52.378 [2024-12-10 22:01:59.928330] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:52.378 [2024-12-10 22:01:59.928343] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:30:52.378 [2024-12-10 22:01:59.928355] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:52.378 [2024-12-10 22:01:59.928383] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:52.378 [2024-12-10 22:01:59.928395] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:52.378 [2024-12-10 22:01:59.928405] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:52.378 [2024-12-10 22:01:59.928415] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:52.379 [2024-12-10 22:01:59.928429] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:52.379 [2024-12-10 22:01:59.928441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.379 [2024-12-10 22:01:59.928450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:52.379 [2024-12-10 22:01:59.928460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:30:52.379 [2024-12-10 22:01:59.928471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.379 [2024-12-10 22:01:59.928545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.379 [2024-12-10 22:01:59.928557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:52.379 [2024-12-10 22:01:59.928568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:52.379 [2024-12-10 22:01:59.928578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.379 [2024-12-10 22:01:59.928669] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:52.379 [2024-12-10 22:01:59.928682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:52.379 [2024-12-10 22:01:59.928694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:52.379 [2024-12-10 22:01:59.928704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:52.379 [2024-12-10 22:01:59.928724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:52.379 [2024-12-10 22:01:59.928743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:52.379 [2024-12-10 22:01:59.928752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:52.379 [2024-12-10 22:01:59.928772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:52.379 [2024-12-10 22:01:59.928782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:52.379 [2024-12-10 22:01:59.928792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:52.379 [2024-12-10 22:01:59.928813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:52.379 [2024-12-10 22:01:59.928823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:52.379 [2024-12-10 22:01:59.928832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:30:52.379 [2024-12-10 22:01:59.928851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:52.379 [2024-12-10 22:01:59.928860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:52.379 [2024-12-10 22:01:59.928879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:52.379 [2024-12-10 22:01:59.928898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:52.379 [2024-12-10 22:01:59.928908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:52.379 [2024-12-10 22:01:59.928926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:52.379 [2024-12-10 22:01:59.928936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:52.379 [2024-12-10 22:01:59.928955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:52.379 [2024-12-10 22:01:59.928964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:52.379 [2024-12-10 22:01:59.928973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:52.379 [2024-12-10 22:01:59.928982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:52.379 [2024-12-10 22:01:59.928992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:52.379 [2024-12-10 22:01:59.929001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:52.379 [2024-12-10 22:01:59.929010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:52.379 [2024-12-10 22:01:59.929019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:52.379 [2024-12-10 22:01:59.929028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:52.379 [2024-12-10 22:01:59.929036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:52.379 [2024-12-10 22:01:59.929045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:52.379 [2024-12-10 22:01:59.929054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.379 [2024-12-10 22:01:59.929074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:52.379 [2024-12-10 22:01:59.929084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:52.379 [2024-12-10 22:01:59.929095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.379 [2024-12-10 22:01:59.929104] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:52.379 [2024-12-10 22:01:59.929115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:52.379 [2024-12-10 22:01:59.929125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:52.379 [2024-12-10 22:01:59.929136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.379 [2024-12-10 22:01:59.929147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:52.379 [2024-12-10 22:01:59.929157] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:52.379 [2024-12-10 22:01:59.929167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:52.379 [2024-12-10 22:01:59.929176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:52.379 [2024-12-10 22:01:59.929185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:52.379 [2024-12-10 22:01:59.929195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:52.379 [2024-12-10 22:01:59.929207] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:52.379 [2024-12-10 22:01:59.929220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:52.379 [2024-12-10 22:01:59.929237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:52.379 [2024-12-10 22:01:59.929248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:52.379 [2024-12-10 22:01:59.929259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:52.379 [2024-12-10 22:01:59.929270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:52.379 [2024-12-10 22:01:59.929280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:52.379 [2024-12-10 22:01:59.929291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:52.379 [2024-12-10 22:01:59.929301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:52.379 [2024-12-10 22:01:59.929311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:52.379 [2024-12-10 22:01:59.929321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:52.379 [2024-12-10 22:01:59.929333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:52.379 [2024-12-10 22:01:59.929343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:52.379 [2024-12-10 22:01:59.929353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:52.379 [2024-12-10 22:01:59.929363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:52.379 [2024-12-10 22:01:59.929373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:52.379 [2024-12-10 22:01:59.929383] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:52.379 [2024-12-10 22:01:59.929395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:52.379 [2024-12-10 22:01:59.929406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:52.379 [2024-12-10 22:01:59.929416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:52.379 [2024-12-10 22:01:59.929426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:52.379 [2024-12-10 22:01:59.929439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:52.379 [2024-12-10 22:01:59.929450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.379 [2024-12-10 22:01:59.929461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:52.379 [2024-12-10 22:01:59.929471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:30:52.379 [2024-12-10 22:01:59.929481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.379 [2024-12-10 22:01:59.968764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.379 [2024-12-10 22:01:59.968803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:52.379 [2024-12-10 22:01:59.968817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.298 ms 00:30:52.379 [2024-12-10 22:01:59.968850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.379 [2024-12-10 22:01:59.968927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.379 [2024-12-10 22:01:59.968938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:52.379 [2024-12-10 22:01:59.968949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:52.379 [2024-12-10 22:01:59.968958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.379 [2024-12-10 22:02:00.045780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.379 [2024-12-10 22:02:00.045822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:52.379 [2024-12-10 22:02:00.045836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.886 ms 00:30:52.379 [2024-12-10 22:02:00.045863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.379 [2024-12-10 22:02:00.045905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.379 [2024-12-10 22:02:00.045918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:52.380 [2024-12-10 22:02:00.045933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:52.380 [2024-12-10 22:02:00.045943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.380 [2024-12-10 22:02:00.046464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.380 [2024-12-10 22:02:00.046487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:52.380 [2024-12-10 22:02:00.046498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:30:52.380 [2024-12-10 22:02:00.046508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.380 [2024-12-10 22:02:00.046633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:52.380 [2024-12-10 22:02:00.046654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:52.380 [2024-12-10 22:02:00.046669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:30:52.380 [2024-12-10 22:02:00.046679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.380 [2024-12-10 22:02:00.066813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.380 [2024-12-10 22:02:00.066855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:52.380 [2024-12-10 22:02:00.066868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.144 ms 00:30:52.380 [2024-12-10 22:02:00.066896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.380 [2024-12-10 22:02:00.085448] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:52.380 [2024-12-10 22:02:00.085486] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:52.380 [2024-12-10 22:02:00.085501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.380 [2024-12-10 22:02:00.085529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:52.380 [2024-12-10 22:02:00.085541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.525 ms 00:30:52.380 [2024-12-10 22:02:00.085552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.114133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.114174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:52.639 [2024-12-10 22:02:00.114189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.580 ms 00:30:52.639 [2024-12-10 22:02:00.114198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.131340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.131373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:52.639 [2024-12-10 22:02:00.131386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.108 ms 00:30:52.639 [2024-12-10 22:02:00.131395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.148494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.148537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:52.639 [2024-12-10 22:02:00.148549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.071 ms 00:30:52.639 [2024-12-10 22:02:00.148558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.149349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.149380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:52.639 [2024-12-10 22:02:00.149397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:30:52.639 [2024-12-10 22:02:00.149406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.230825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 
22:02:00.230909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:52.639 [2024-12-10 22:02:00.230934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.527 ms 00:30:52.639 [2024-12-10 22:02:00.230946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.240772] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:52.639 [2024-12-10 22:02:00.243008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.243038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:52.639 [2024-12-10 22:02:00.243060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.031 ms 00:30:52.639 [2024-12-10 22:02:00.243071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.243152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.243166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:52.639 [2024-12-10 22:02:00.243178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:52.639 [2024-12-10 22:02:00.243193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.244889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.244938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:52.639 [2024-12-10 22:02:00.244967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:30:52.639 [2024-12-10 22:02:00.244977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.245017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.245030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:52.639 [2024-12-10 22:02:00.245041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:52.639 [2024-12-10 22:02:00.245051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.245109] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:52.639 [2024-12-10 22:02:00.245123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.245134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:52.639 [2024-12-10 22:02:00.245146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:52.639 [2024-12-10 22:02:00.245156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.280732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.280771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:52.639 [2024-12-10 22:02:00.280808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.611 ms 00:30:52.639 [2024-12-10 22:02:00.280819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.280896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.639 [2024-12-10 22:02:00.280909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:52.639 [2024-12-10 
22:02:00.280920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:52.639 [2024-12-10 22:02:00.280931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.639 [2024-12-10 22:02:00.282184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.850 ms, result 0 00:30:54.016  [2024-12-10T22:02:02.685Z] Copying: 1292/1048576 [kB] (1292 kBps) [2024-12-10T22:02:03.624Z] Copying: 6060/1048576 [kB] (4768 kBps) [2024-12-10T22:02:04.562Z] Copying: 35/1024 [MB] (29 MBps) [2024-12-10T22:02:05.940Z] Copying: 66/1024 [MB] (30 MBps) [2024-12-10T22:02:06.509Z] Copying: 97/1024 [MB] (30 MBps) [2024-12-10T22:02:07.889Z] Copying: 128/1024 [MB] (31 MBps) [2024-12-10T22:02:08.827Z] Copying: 159/1024 [MB] (30 MBps) [2024-12-10T22:02:09.766Z] Copying: 189/1024 [MB] (30 MBps) [2024-12-10T22:02:10.705Z] Copying: 222/1024 [MB] (32 MBps) [2024-12-10T22:02:11.643Z] Copying: 253/1024 [MB] (31 MBps) [2024-12-10T22:02:12.581Z] Copying: 284/1024 [MB] (31 MBps) [2024-12-10T22:02:13.520Z] Copying: 315/1024 [MB] (31 MBps) [2024-12-10T22:02:14.899Z] Copying: 346/1024 [MB] (31 MBps) [2024-12-10T22:02:15.837Z] Copying: 378/1024 [MB] (31 MBps) [2024-12-10T22:02:16.776Z] Copying: 408/1024 [MB] (29 MBps) [2024-12-10T22:02:17.715Z] Copying: 438/1024 [MB] (30 MBps) [2024-12-10T22:02:18.653Z] Copying: 470/1024 [MB] (32 MBps) [2024-12-10T22:02:19.664Z] Copying: 501/1024 [MB] (30 MBps) [2024-12-10T22:02:20.601Z] Copying: 532/1024 [MB] (31 MBps) [2024-12-10T22:02:21.539Z] Copying: 564/1024 [MB] (31 MBps) [2024-12-10T22:02:22.916Z] Copying: 595/1024 [MB] (31 MBps) [2024-12-10T22:02:23.485Z] Copying: 625/1024 [MB] (30 MBps) [2024-12-10T22:02:24.863Z] Copying: 657/1024 [MB] (31 MBps) [2024-12-10T22:02:25.799Z] Copying: 687/1024 [MB] (30 MBps) [2024-12-10T22:02:26.737Z] Copying: 717/1024 [MB] (29 MBps) [2024-12-10T22:02:27.674Z] Copying: 745/1024 [MB] (28 MBps) [2024-12-10T22:02:28.610Z] Copying: 775/1024 [MB] (29 MBps) [2024-12-10T22:02:29.547Z] Copying: 806/1024 [MB] (31 MBps) [2024-12-10T22:02:30.484Z] Copying: 836/1024 [MB] (30 MBps) [2024-12-10T22:02:31.861Z] Copying: 866/1024 [MB] (30 MBps) [2024-12-10T22:02:32.797Z] Copying: 898/1024 [MB] (31 MBps) [2024-12-10T22:02:33.735Z] Copying: 928/1024 [MB] (30 MBps) [2024-12-10T22:02:34.671Z] Copying: 959/1024 [MB] (30 MBps) [2024-12-10T22:02:35.608Z] Copying: 989/1024 [MB] (30 MBps) [2024-12-10T22:02:35.608Z] Copying: 1020/1024 [MB] (30 MBps) [2024-12-10T22:02:37.512Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-12-10 22:02:37.154150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.154271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:29.781 [2024-12-10 22:02:37.154298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:29.781 [2024-12-10 22:02:37.154315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.154353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:29.781 [2024-12-10 22:02:37.160735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.160789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:29.781 [2024-12-10 22:02:37.160805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.364 ms 00:31:29.781 [2024-12-10 22:02:37.160817] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.161078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.161105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:29.781 [2024-12-10 22:02:37.161119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:31:29.781 [2024-12-10 22:02:37.161130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.175310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.175398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:29.781 [2024-12-10 22:02:37.175417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.180 ms 00:31:29.781 [2024-12-10 22:02:37.175429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.180686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.180739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:29.781 [2024-12-10 22:02:37.180762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.218 ms 00:31:29.781 [2024-12-10 22:02:37.180773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.218059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.218105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:29.781 [2024-12-10 22:02:37.218118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.299 ms 00:31:29.781 [2024-12-10 22:02:37.218128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.238976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.239019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:29.781 [2024-12-10 22:02:37.239034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.822 ms 00:31:29.781 [2024-12-10 22:02:37.239044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.241694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.241744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:29.781 [2024-12-10 22:02:37.241757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:31:29.781 [2024-12-10 22:02:37.241791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.276626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.276674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:29.781 [2024-12-10 22:02:37.276704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.873 ms 00:31:29.781 [2024-12-10 22:02:37.276714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.310820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.310859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:29.781 [2024-12-10 22:02:37.310872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 34.125 ms 00:31:29.781 [2024-12-10 22:02:37.310882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.781 [2024-12-10 22:02:37.344644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.781 [2024-12-10 22:02:37.344676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:29.781 [2024-12-10 22:02:37.344688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.763 ms 00:31:29.781 [2024-12-10 22:02:37.344698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.782 [2024-12-10 22:02:37.378356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.782 [2024-12-10 22:02:37.378398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:29.782 [2024-12-10 22:02:37.378426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.619 ms 00:31:29.782 [2024-12-10 22:02:37.378437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.782 [2024-12-10 22:02:37.378480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:29.782 [2024-12-10 22:02:37.378496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:29.782 [2024-12-10 22:02:37.378508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1792 / 261120 wr_cnt: 1 state: open 00:31:29.782 [2024-12-10 22:02:37.378519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.378995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379233] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:29.782 [2024-12-10 22:02:37.379443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379496] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:29.783 [2024-12-10 22:02:37.379602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:29.783 [2024-12-10 22:02:37.379612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b02637a8-7852-4645-9f76-e532285a8360 00:31:29.783 [2024-12-10 22:02:37.379624] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262912 00:31:29.783 [2024-12-10 22:02:37.379633] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 180672 00:31:29.783 [2024-12-10 22:02:37.379648] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 178688 00:31:29.783 [2024-12-10 22:02:37.379659] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0111 00:31:29.783 [2024-12-10 22:02:37.379669] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:29.783 [2024-12-10 22:02:37.379690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:29.783 [2024-12-10 22:02:37.379701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:29.783 [2024-12-10 22:02:37.379710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:29.783 [2024-12-10 22:02:37.379719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:29.783 [2024-12-10 22:02:37.379729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.783 [2024-12-10 22:02:37.379740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:29.783 [2024-12-10 22:02:37.379750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:31:29.783 [2024-12-10 22:02:37.379760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.783 [2024-12-10 22:02:37.398916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.783 [2024-12-10 22:02:37.398952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:29.783 [2024-12-10 22:02:37.398964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.151 ms 00:31:29.783 [2024-12-10 22:02:37.398974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.783 [2024-12-10 22:02:37.399611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.783 [2024-12-10 22:02:37.399634] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:29.783 [2024-12-10 22:02:37.399645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:31:29.783 [2024-12-10 22:02:37.399655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.783 [2024-12-10 22:02:37.450007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.783 [2024-12-10 22:02:37.450043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:29.783 [2024-12-10 22:02:37.450077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.783 [2024-12-10 22:02:37.450088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.783 [2024-12-10 22:02:37.450147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.783 [2024-12-10 22:02:37.450159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:29.783 [2024-12-10 22:02:37.450170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.783 [2024-12-10 22:02:37.450181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.783 [2024-12-10 22:02:37.450250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.783 [2024-12-10 22:02:37.450263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:29.783 [2024-12-10 22:02:37.450274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.783 [2024-12-10 22:02:37.450285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.783 [2024-12-10 22:02:37.450302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.783 [2024-12-10 22:02:37.450313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:29.783 [2024-12-10 22:02:37.450323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.783 [2024-12-10 22:02:37.450333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.574214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.042 [2024-12-10 22:02:37.574286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:30.042 [2024-12-10 22:02:37.574302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.042 [2024-12-10 22:02:37.574313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.671576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.042 [2024-12-10 22:02:37.671631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:30.042 [2024-12-10 22:02:37.671646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.042 [2024-12-10 22:02:37.671657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.671771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.042 [2024-12-10 22:02:37.671789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:30.042 [2024-12-10 22:02:37.671800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.042 [2024-12-10 22:02:37.671811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.671851] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.042 [2024-12-10 22:02:37.671863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:30.042 [2024-12-10 22:02:37.671873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.042 [2024-12-10 22:02:37.671883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.672022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.042 [2024-12-10 22:02:37.672037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:30.042 [2024-12-10 22:02:37.672053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.042 [2024-12-10 22:02:37.672063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.672121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.042 [2024-12-10 22:02:37.672135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:30.042 [2024-12-10 22:02:37.672146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.042 [2024-12-10 22:02:37.672156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.672198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.042 [2024-12-10 22:02:37.672210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:30.042 [2024-12-10 22:02:37.672226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.042 [2024-12-10 22:02:37.672236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.672282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.042 [2024-12-10 22:02:37.672294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:30.042 [2024-12-10 22:02:37.672305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.042 [2024-12-10 22:02:37.672315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.042 [2024-12-10 22:02:37.672477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.255 ms, result 0 00:31:30.978 00:31:30.978 00:31:31.236 22:02:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:33.140 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:33.140 22:02:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:33.140 [2024-12-10 22:02:40.463181] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
00:31:33.140 [2024-12-10 22:02:40.463324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84691 ] 00:31:33.140 [2024-12-10 22:02:40.645561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.140 [2024-12-10 22:02:40.762705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.709 [2024-12-10 22:02:41.132510] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:33.709 [2024-12-10 22:02:41.132580] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:33.709 [2024-12-10 22:02:41.296289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.296342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:33.709 [2024-12-10 22:02:41.296358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:33.709 [2024-12-10 22:02:41.296369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.296431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.296447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:33.709 [2024-12-10 22:02:41.296458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:33.709 [2024-12-10 22:02:41.296469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.296489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:33.709 [2024-12-10 22:02:41.297434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:33.709 [2024-12-10 22:02:41.297464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.297475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:33.709 [2024-12-10 22:02:41.297487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:31:33.709 [2024-12-10 22:02:41.297496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.299269] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:33.709 [2024-12-10 22:02:41.318154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.318192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:33.709 [2024-12-10 22:02:41.318222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.917 ms 00:31:33.709 [2024-12-10 22:02:41.318233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.318301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.318314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:33.709 [2024-12-10 22:02:41.318326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:33.709 [2024-12-10 22:02:41.318336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.326501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:33.709 [2024-12-10 22:02:41.326533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:33.709 [2024-12-10 22:02:41.326560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.107 ms 00:31:33.709 [2024-12-10 22:02:41.326576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.326662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.326674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:33.709 [2024-12-10 22:02:41.326685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:33.709 [2024-12-10 22:02:41.326694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.326735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.326747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:33.709 [2024-12-10 22:02:41.326757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:33.709 [2024-12-10 22:02:41.326767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.326796] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:33.709 [2024-12-10 22:02:41.331844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.331876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:33.709 [2024-12-10 22:02:41.331893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.063 ms 00:31:33.709 [2024-12-10 22:02:41.331904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.331938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.331949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:33.709 [2024-12-10 22:02:41.331960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:33.709 [2024-12-10 22:02:41.331970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.332020] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:33.709 [2024-12-10 22:02:41.332060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:33.709 [2024-12-10 22:02:41.332097] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:33.709 [2024-12-10 22:02:41.332120] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:33.709 [2024-12-10 22:02:41.332226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:33.709 [2024-12-10 22:02:41.332241] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:33.709 [2024-12-10 22:02:41.332254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:33.709 [2024-12-10 22:02:41.332267] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:33.709 [2024-12-10 22:02:41.332281] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:33.709 [2024-12-10 22:02:41.332292] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:33.709 [2024-12-10 22:02:41.332302] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:33.709 [2024-12-10 22:02:41.332313] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:33.709 [2024-12-10 22:02:41.332326] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:33.709 [2024-12-10 22:02:41.332337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.332348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:33.709 [2024-12-10 22:02:41.332359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:31:33.709 [2024-12-10 22:02:41.332368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.332441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.709 [2024-12-10 22:02:41.332453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:33.709 [2024-12-10 22:02:41.332463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:33.709 [2024-12-10 22:02:41.332473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.709 [2024-12-10 22:02:41.332562] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:33.709 [2024-12-10 22:02:41.332576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:33.709 [2024-12-10 22:02:41.332587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:33.709 [2024-12-10 22:02:41.332598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.709 [2024-12-10 22:02:41.332608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:33.709 [2024-12-10 22:02:41.332617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:33.709 [2024-12-10 22:02:41.332627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:33.710 [2024-12-10 22:02:41.332636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:33.710 [2024-12-10 22:02:41.332645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:33.710 [2024-12-10 22:02:41.332666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:33.710 [2024-12-10 22:02:41.332677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:33.710 [2024-12-10 22:02:41.332686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:33.710 [2024-12-10 22:02:41.332707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:33.710 [2024-12-10 22:02:41.332717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:33.710 [2024-12-10 22:02:41.332726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:33.710 [2024-12-10 22:02:41.332746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:33.710 [2024-12-10 22:02:41.332755] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:33.710 [2024-12-10 22:02:41.332774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.710 [2024-12-10 22:02:41.332793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:33.710 [2024-12-10 22:02:41.332803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.710 [2024-12-10 22:02:41.332822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:33.710 [2024-12-10 22:02:41.332832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.710 [2024-12-10 22:02:41.332850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:33.710 [2024-12-10 22:02:41.332860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.710 [2024-12-10 22:02:41.332878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:33.710 [2024-12-10 22:02:41.332888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:33.710 [2024-12-10 22:02:41.332906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:33.710 [2024-12-10 22:02:41.332915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:33.710 [2024-12-10 22:02:41.332924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:33.710 [2024-12-10 22:02:41.332934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:33.710 [2024-12-10 22:02:41.332943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:33.710 [2024-12-10 22:02:41.332951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:33.710 [2024-12-10 22:02:41.332969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:33.710 [2024-12-10 22:02:41.332980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.710 [2024-12-10 22:02:41.332989] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:33.710 [2024-12-10 22:02:41.332999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:33.710 [2024-12-10 22:02:41.333009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:33.710 [2024-12-10 22:02:41.333018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.710 [2024-12-10 22:02:41.333028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:33.710 [2024-12-10 22:02:41.333038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:33.710 [2024-12-10 22:02:41.333047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:33.710 
[2024-12-10 22:02:41.333057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:33.710 [2024-12-10 22:02:41.333076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:33.710 [2024-12-10 22:02:41.333086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:33.710 [2024-12-10 22:02:41.333097] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:33.710 [2024-12-10 22:02:41.333109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:33.710 [2024-12-10 22:02:41.333126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:33.710 [2024-12-10 22:02:41.333136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:33.710 [2024-12-10 22:02:41.333147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:33.710 [2024-12-10 22:02:41.333158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:33.710 [2024-12-10 22:02:41.333169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:33.710 [2024-12-10 22:02:41.333179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:33.710 [2024-12-10 22:02:41.333189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:33.710 [2024-12-10 22:02:41.333200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:33.710 [2024-12-10 22:02:41.333210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:33.710 [2024-12-10 22:02:41.333221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:33.710 [2024-12-10 22:02:41.333231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:33.710 [2024-12-10 22:02:41.333242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:33.710 [2024-12-10 22:02:41.333252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:33.710 [2024-12-10 22:02:41.333263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:33.710 [2024-12-10 22:02:41.333273] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:33.710 [2024-12-10 22:02:41.333284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:33.710 [2024-12-10 22:02:41.333295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:33.710 [2024-12-10 22:02:41.333306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:33.710 [2024-12-10 22:02:41.333318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:33.710 [2024-12-10 22:02:41.333329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:33.710 [2024-12-10 22:02:41.333341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.710 [2024-12-10 22:02:41.333351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:33.710 [2024-12-10 22:02:41.333361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:31:33.710 [2024-12-10 22:02:41.333371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.710 [2024-12-10 22:02:41.374038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.710 [2024-12-10 22:02:41.374100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:33.710 [2024-12-10 22:02:41.374115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.684 ms 00:31:33.710 [2024-12-10 22:02:41.374147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.710 [2024-12-10 22:02:41.374226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.710 [2024-12-10 22:02:41.374238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:33.710 [2024-12-10 22:02:41.374249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:33.710 [2024-12-10 22:02:41.374260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.440150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.440189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:33.970 [2024-12-10 22:02:41.440220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.937 ms 00:31:33.970 [2024-12-10 22:02:41.440231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.440274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.440287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:33.970 [2024-12-10 22:02:41.440303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:33.970 [2024-12-10 22:02:41.440315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.440824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.440848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:33.970 [2024-12-10 22:02:41.440860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:31:33.970 [2024-12-10 22:02:41.440870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.440997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.441011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:33.970 [2024-12-10 22:02:41.441026] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:31:33.970 [2024-12-10 22:02:41.441037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.458131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.458167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:33.970 [2024-12-10 22:02:41.458182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.077 ms 00:31:33.970 [2024-12-10 22:02:41.458192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.477075] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:33.970 [2024-12-10 22:02:41.477112] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:33.970 [2024-12-10 22:02:41.477143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.477155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:33.970 [2024-12-10 22:02:41.477166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.856 ms 00:31:33.970 [2024-12-10 22:02:41.477177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.505418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.505458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:33.970 [2024-12-10 22:02:41.505472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.243 ms 00:31:33.970 [2024-12-10 22:02:41.505483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.522527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.522561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:33.970 [2024-12-10 22:02:41.522573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.998 ms 00:31:33.970 [2024-12-10 22:02:41.522583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.539560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.539608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:33.970 [2024-12-10 22:02:41.539621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.950 ms 00:31:33.970 [2024-12-10 22:02:41.539630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.540432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.540462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:33.970 [2024-12-10 22:02:41.540479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:31:33.970 [2024-12-10 22:02:41.540489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.624426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.624489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:33.970 [2024-12-10 22:02:41.624512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.050 ms 00:31:33.970 [2024-12-10 22:02:41.624539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.634510] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:33.970 [2024-12-10 22:02:41.636788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.636816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:33.970 [2024-12-10 22:02:41.636845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.219 ms 00:31:33.970 [2024-12-10 22:02:41.636857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.636939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.636954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:33.970 [2024-12-10 22:02:41.636965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:33.970 [2024-12-10 22:02:41.636980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.638309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.638352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:33.970 [2024-12-10 22:02:41.638366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:31:33.970 [2024-12-10 22:02:41.638377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.638402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.638415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:33.970 [2024-12-10 22:02:41.638427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:33.970 [2024-12-10 22:02:41.638437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.638491] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:33.970 [2024-12-10 22:02:41.638504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.638514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:33.970 [2024-12-10 22:02:41.638526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:33.970 [2024-12-10 22:02:41.638537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.674951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.674995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:33.970 [2024-12-10 22:02:41.675015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.452 ms 00:31:33.970 [2024-12-10 22:02:41.675027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.970 [2024-12-10 22:02:41.675112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.970 [2024-12-10 22:02:41.675126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:33.970 [2024-12-10 22:02:41.675147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:33.970 [2024-12-10 22:02:41.675158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:33.970 [2024-12-10 22:02:41.676400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.215 ms, result 0 00:31:35.424  [2024-12-10T22:03:23.299Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-10 22:03:23.150473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.150591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:15.568 [2024-12-10 22:03:23.150900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:15.568 [2024-12-10 22:03:23.150926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.568 [2024-12-10 22:03:23.150979] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:15.568 [2024-12-10 22:03:23.158951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.159012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:15.568 
[2024-12-10 22:03:23.159031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.947 ms 00:32:15.568 [2024-12-10 22:03:23.159058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.568 [2024-12-10 22:03:23.159390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.159424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:15.568 [2024-12-10 22:03:23.159442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:32:15.568 [2024-12-10 22:03:23.159458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.568 [2024-12-10 22:03:23.163881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.163921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:15.568 [2024-12-10 22:03:23.163939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.406 ms 00:32:15.568 [2024-12-10 22:03:23.163962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.568 [2024-12-10 22:03:23.170044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.170100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:15.568 [2024-12-10 22:03:23.170114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.062 ms 00:32:15.568 [2024-12-10 22:03:23.170125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.568 [2024-12-10 22:03:23.205847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.205890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:15.568 [2024-12-10 22:03:23.205904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.709 ms 00:32:15.568 [2024-12-10 22:03:23.205915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.568 [2024-12-10 22:03:23.226263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.226305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:15.568 [2024-12-10 22:03:23.226320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.324 ms 00:32:15.568 [2024-12-10 22:03:23.226331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.568 [2024-12-10 22:03:23.228725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.228765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:15.568 [2024-12-10 22:03:23.228778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:32:15.568 [2024-12-10 22:03:23.228789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.568 [2024-12-10 22:03:23.264327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.568 [2024-12-10 22:03:23.264366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:15.568 [2024-12-10 22:03:23.264380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.577 ms 00:32:15.568 [2024-12-10 22:03:23.264390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.828 [2024-12-10 22:03:23.299794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.828 [2024-12-10 22:03:23.299833] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:15.828 [2024-12-10 22:03:23.299847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.407 ms 00:32:15.828 [2024-12-10 22:03:23.299857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.828 [2024-12-10 22:03:23.334313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.828 [2024-12-10 22:03:23.334345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:15.828 [2024-12-10 22:03:23.334358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.459 ms 00:32:15.828 [2024-12-10 22:03:23.334367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.828 [2024-12-10 22:03:23.368523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.828 [2024-12-10 22:03:23.368560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:15.828 [2024-12-10 22:03:23.368589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.113 ms 00:32:15.828 [2024-12-10 22:03:23.368599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.828 [2024-12-10 22:03:23.368634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:15.828 [2024-12-10 22:03:23.368657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:15.828 [2024-12-10 22:03:23.368674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1792 / 261120 wr_cnt: 1 state: open 00:32:15.828 [2024-12-10 22:03:23.368685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 
261120 wr_cnt: 0 state: free 00:32:15.828 [2024-12-10 22:03:23.368832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.368998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369391] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 
22:03:23.369660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:15.829 [2024-12-10 22:03:23.369777] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:15.829 [2024-12-10 22:03:23.369788] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b02637a8-7852-4645-9f76-e532285a8360 00:32:15.829 [2024-12-10 22:03:23.369799] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262912 00:32:15.829 [2024-12-10 22:03:23.369809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:15.829 [2024-12-10 22:03:23.369819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:15.829 [2024-12-10 22:03:23.369831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:15.829 [2024-12-10 22:03:23.369851] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:15.829 [2024-12-10 22:03:23.369862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:15.830 [2024-12-10 22:03:23.369873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:15.830 [2024-12-10 22:03:23.369882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:15.830 [2024-12-10 22:03:23.369891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:15.830 [2024-12-10 22:03:23.369901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.830 [2024-12-10 22:03:23.369912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:15.830 [2024-12-10 22:03:23.369923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:32:15.830 [2024-12-10 22:03:23.369938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.830 [2024-12-10 22:03:23.388957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.830 [2024-12-10 22:03:23.388988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:15.830 [2024-12-10 22:03:23.389000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.015 ms 00:32:15.830 [2024-12-10 22:03:23.389009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:15.830 [2024-12-10 22:03:23.389642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.830 [2024-12-10 22:03:23.389670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:15.830 [2024-12-10 22:03:23.389682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:32:15.830 [2024-12-10 22:03:23.389693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.830 [2024-12-10 22:03:23.438974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:15.830 [2024-12-10 22:03:23.439009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:15.830 [2024-12-10 22:03:23.439038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:15.830 [2024-12-10 22:03:23.439049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.830 [2024-12-10 22:03:23.439113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:15.830 [2024-12-10 22:03:23.439129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:15.830 [2024-12-10 22:03:23.439141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:15.830 [2024-12-10 22:03:23.439150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.830 [2024-12-10 22:03:23.439212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:15.830 [2024-12-10 22:03:23.439225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:15.830 [2024-12-10 22:03:23.439235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:15.830 [2024-12-10 22:03:23.439245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.830 [2024-12-10 22:03:23.439262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:15.830 [2024-12-10 22:03:23.439273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:15.830 [2024-12-10 22:03:23.439288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:15.830 [2024-12-10 22:03:23.439298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.558641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.089 [2024-12-10 22:03:23.558693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:16.089 [2024-12-10 22:03:23.558707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.089 [2024-12-10 22:03:23.558718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.654801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.089 [2024-12-10 22:03:23.654858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:16.089 [2024-12-10 22:03:23.654889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.089 [2024-12-10 22:03:23.654901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.655004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.089 [2024-12-10 22:03:23.655016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:16.089 [2024-12-10 22:03:23.655028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
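
Note that every Rollback step in this teardown reports duration: 0.000 ms, which is what a clean shutdown should look like: the rollback handlers are invoked but find nothing to undo. A hedged one-liner (again assuming one log entry per line) to surface the opposite case, where a failed startup left real work for the rollback path:

    grep -A 2 'trace_step: .*Rollback' console.log | grep 'duration:' | grep -v '0.000 ms'
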
00:32:16.089 [2024-12-10 22:03:23.655039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.655094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.089 [2024-12-10 22:03:23.655107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:16.089 [2024-12-10 22:03:23.655118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.089 [2024-12-10 22:03:23.655133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.655276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.089 [2024-12-10 22:03:23.655290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:16.089 [2024-12-10 22:03:23.655302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.089 [2024-12-10 22:03:23.655313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.655355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.089 [2024-12-10 22:03:23.655367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:16.089 [2024-12-10 22:03:23.655379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.089 [2024-12-10 22:03:23.655389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.655437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.089 [2024-12-10 22:03:23.655449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:16.089 [2024-12-10 22:03:23.655460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.089 [2024-12-10 22:03:23.655471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.655517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:16.089 [2024-12-10 22:03:23.655529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:16.089 [2024-12-10 22:03:23.655540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:16.089 [2024-12-10 22:03:23.655554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:16.089 [2024-12-10 22:03:23.655715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 506.091 ms, result 0 00:32:17.026 00:32:17.026 00:32:17.026 22:03:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:18.930 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:18.931 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:18.931 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:18.931 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:18.931 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:18.931 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:19.190 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:19.190 22:03:26 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:19.190 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82732 00:32:19.190 22:03:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82732 ']' 00:32:19.190 22:03:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82732 00:32:19.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82732) - No such process 00:32:19.190 Process with pid 82732 is not found 00:32:19.190 22:03:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82732 is not found' 00:32:19.190 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:19.449 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:19.449 Remove shared memory files 00:32:19.449 22:03:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:19.449 22:03:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:19.449 22:03:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:19.449 22:03:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:19.449 22:03:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:19.449 22:03:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:19.449 ************************************ 00:32:19.449 END TEST ftl_dirty_shutdown 00:32:19.449 ************************************ 00:32:19.449 00:32:19.449 real 3m56.472s 00:32:19.449 user 4m29.069s 00:32:19.449 sys 0m43.448s 00:32:19.449 22:03:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.449 22:03:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:19.449 22:03:27 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:19.449 22:03:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:19.449 22:03:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.449 22:03:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:19.449 ************************************ 00:32:19.449 START TEST ftl_upgrade_shutdown 00:32:19.449 ************************************ 00:32:19.449 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:19.708 * Looking for test storage... 
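
For context on the md5sum -c that closed ftl_dirty_shutdown above: the test checksums the file while the FTL bdev is live, forces a dirty shutdown, lets recovery replay the device state, and then re-verifies. A minimal sketch of that round trip (paths shortened; the real files live under test/ftl/ as shown in the trace):

    md5sum testfile2 > testfile2.md5    # taken while the device is live
    # ... dirty shutdown, restart, FTL recovery ...
    md5sum -c testfile2.md5             # "testfile2: OK" means the data survived
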
00:32:19.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.708 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:19.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.709 --rc genhtml_branch_coverage=1 00:32:19.709 --rc genhtml_function_coverage=1 00:32:19.709 --rc genhtml_legend=1 00:32:19.709 --rc geninfo_all_blocks=1 00:32:19.709 --rc geninfo_unexecuted_blocks=1 00:32:19.709 00:32:19.709 ' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:19.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.709 --rc genhtml_branch_coverage=1 00:32:19.709 --rc genhtml_function_coverage=1 00:32:19.709 --rc genhtml_legend=1 00:32:19.709 --rc geninfo_all_blocks=1 00:32:19.709 --rc geninfo_unexecuted_blocks=1 00:32:19.709 00:32:19.709 ' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:19.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.709 --rc genhtml_branch_coverage=1 00:32:19.709 --rc genhtml_function_coverage=1 00:32:19.709 --rc genhtml_legend=1 00:32:19.709 --rc geninfo_all_blocks=1 00:32:19.709 --rc geninfo_unexecuted_blocks=1 00:32:19.709 00:32:19.709 ' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:19.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.709 --rc genhtml_branch_coverage=1 00:32:19.709 --rc genhtml_function_coverage=1 00:32:19.709 --rc genhtml_legend=1 00:32:19.709 --rc geninfo_all_blocks=1 00:32:19.709 --rc geninfo_unexecuted_blocks=1 00:32:19.709 00:32:19.709 ' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:19.709 22:03:27 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85229 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85229 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85229 ']' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:19.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:19.709 22:03:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:19.968 [2024-12-10 22:03:27.458419] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
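
The cmp_versions walk traced above boils down to a field-by-field numeric compare of dotted version strings; lt 1.15 2 succeeding is why the pre-2.0 lcov flags get exported. A simplified, self-contained bash equivalent (a sketch of the same logic, not a copy of scripts/common.sh; numeric fields only):

    lt() {   # true if $1 is an older dotted version than $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.0"   # matches the branch taken above
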
00:32:19.968 [2024-12-10 22:03:27.458568] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85229 ] 00:32:19.969 [2024-12-10 22:03:27.641186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.227 [2024-12-10 22:03:27.753765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:21.166 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:21.426 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:21.426 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:21.426 22:03:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:21.426 22:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:21.426 22:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:21.426 22:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:21.426 22:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:21.426 22:03:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:21.426 22:03:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:21.426 { 00:32:21.426 "name": "basen1", 00:32:21.426 "aliases": [ 00:32:21.426 "08e75968-b678-4bb5-b82c-7a1720efce85" 00:32:21.426 ], 00:32:21.426 "product_name": "NVMe disk", 00:32:21.426 "block_size": 4096, 00:32:21.426 "num_blocks": 1310720, 00:32:21.426 "uuid": "08e75968-b678-4bb5-b82c-7a1720efce85", 00:32:21.426 "numa_id": -1, 00:32:21.426 "assigned_rate_limits": { 00:32:21.426 "rw_ios_per_sec": 0, 00:32:21.426 "rw_mbytes_per_sec": 0, 00:32:21.426 "r_mbytes_per_sec": 0, 00:32:21.426 "w_mbytes_per_sec": 0 00:32:21.426 }, 00:32:21.426 "claimed": true, 00:32:21.426 "claim_type": "read_many_write_one", 00:32:21.426 "zoned": false, 00:32:21.426 "supported_io_types": { 00:32:21.426 "read": true, 00:32:21.426 "write": true, 00:32:21.426 "unmap": true, 00:32:21.426 "flush": true, 00:32:21.426 "reset": true, 00:32:21.426 "nvme_admin": true, 00:32:21.426 "nvme_io": true, 00:32:21.426 "nvme_io_md": false, 00:32:21.426 "write_zeroes": true, 00:32:21.426 "zcopy": false, 00:32:21.426 "get_zone_info": false, 00:32:21.426 "zone_management": false, 00:32:21.426 "zone_append": false, 00:32:21.426 "compare": true, 00:32:21.426 "compare_and_write": false, 00:32:21.426 "abort": true, 00:32:21.426 "seek_hole": false, 00:32:21.426 "seek_data": false, 00:32:21.426 "copy": true, 00:32:21.426 "nvme_iov_md": false 00:32:21.426 }, 00:32:21.426 "driver_specific": { 00:32:21.426 "nvme": [ 00:32:21.426 { 00:32:21.426 "pci_address": "0000:00:11.0", 00:32:21.426 "trid": { 00:32:21.426 "trtype": "PCIe", 00:32:21.426 "traddr": "0000:00:11.0" 00:32:21.426 }, 00:32:21.426 "ctrlr_data": { 00:32:21.426 "cntlid": 0, 00:32:21.426 "vendor_id": "0x1b36", 00:32:21.426 "model_number": "QEMU NVMe Ctrl", 00:32:21.426 "serial_number": "12341", 00:32:21.426 "firmware_revision": "8.0.0", 00:32:21.426 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:21.426 "oacs": { 00:32:21.426 "security": 0, 00:32:21.426 "format": 1, 00:32:21.426 "firmware": 0, 00:32:21.426 "ns_manage": 1 00:32:21.426 }, 00:32:21.426 "multi_ctrlr": false, 00:32:21.426 "ana_reporting": false 00:32:21.426 }, 00:32:21.426 "vs": { 00:32:21.426 "nvme_version": "1.4" 00:32:21.426 }, 00:32:21.426 "ns_data": { 00:32:21.426 "id": 1, 00:32:21.426 "can_share": false 00:32:21.426 } 00:32:21.426 } 00:32:21.426 ], 00:32:21.426 "mp_policy": "active_passive" 00:32:21.426 } 00:32:21.426 } 00:32:21.426 ]' 00:32:21.426 22:03:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:21.426 22:03:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:21.426 22:03:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=701344bb-14bd-4b7d-bb0c-da70ae979c46 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:21.688 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 701344bb-14bd-4b7d-bb0c-da70ae979c46 00:32:21.978 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:22.258 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ab9b8b60-a45a-4060-9856-acd3cdfa2061 00:32:22.258 22:03:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ab9b8b60-a45a-4060-9856-acd3cdfa2061 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3aa03581-b449-4066-b599-b5fbc256bdcd 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3aa03581-b449-4066-b599-b5fbc256bdcd ]] 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3aa03581-b449-4066-b599-b5fbc256bdcd 5120 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3aa03581-b449-4066-b599-b5fbc256bdcd 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3aa03581-b449-4066-b599-b5fbc256bdcd 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3aa03581-b449-4066-b599-b5fbc256bdcd 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3aa03581-b449-4066-b599-b5fbc256bdcd 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:22.521 { 00:32:22.521 "name": "3aa03581-b449-4066-b599-b5fbc256bdcd", 00:32:22.521 "aliases": [ 00:32:22.521 "lvs/basen1p0" 00:32:22.521 ], 00:32:22.521 "product_name": "Logical Volume", 00:32:22.521 "block_size": 4096, 00:32:22.521 "num_blocks": 5242880, 00:32:22.521 "uuid": "3aa03581-b449-4066-b599-b5fbc256bdcd", 00:32:22.521 "assigned_rate_limits": { 00:32:22.521 "rw_ios_per_sec": 0, 00:32:22.521 "rw_mbytes_per_sec": 0, 00:32:22.521 "r_mbytes_per_sec": 0, 00:32:22.521 "w_mbytes_per_sec": 0 00:32:22.521 }, 00:32:22.521 "claimed": false, 00:32:22.521 "zoned": false, 00:32:22.521 "supported_io_types": { 00:32:22.521 "read": true, 00:32:22.521 "write": true, 00:32:22.521 "unmap": true, 00:32:22.521 "flush": false, 00:32:22.521 "reset": true, 00:32:22.521 "nvme_admin": false, 00:32:22.521 "nvme_io": false, 00:32:22.521 "nvme_io_md": false, 00:32:22.521 "write_zeroes": 
true, 00:32:22.521 "zcopy": false, 00:32:22.521 "get_zone_info": false, 00:32:22.521 "zone_management": false, 00:32:22.521 "zone_append": false, 00:32:22.521 "compare": false, 00:32:22.521 "compare_and_write": false, 00:32:22.521 "abort": false, 00:32:22.521 "seek_hole": true, 00:32:22.521 "seek_data": true, 00:32:22.521 "copy": false, 00:32:22.521 "nvme_iov_md": false 00:32:22.521 }, 00:32:22.521 "driver_specific": { 00:32:22.521 "lvol": { 00:32:22.521 "lvol_store_uuid": "ab9b8b60-a45a-4060-9856-acd3cdfa2061", 00:32:22.521 "base_bdev": "basen1", 00:32:22.521 "thin_provision": true, 00:32:22.521 "num_allocated_clusters": 0, 00:32:22.521 "snapshot": false, 00:32:22.521 "clone": false, 00:32:22.521 "esnap_clone": false 00:32:22.521 } 00:32:22.521 } 00:32:22.521 } 00:32:22.521 ]' 00:32:22.521 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:22.780 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:22.780 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:22.780 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:22.780 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:22.780 22:03:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:22.780 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:22.780 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:22.780 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:23.038 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:23.038 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:23.038 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:23.298 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:23.298 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:23.298 22:03:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3aa03581-b449-4066-b599-b5fbc256bdcd -c cachen1p0 --l2p_dram_limit 2 00:32:23.298 [2024-12-10 22:03:30.980004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:30.980086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:23.298 [2024-12-10 22:03:30.980106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:23.298 [2024-12-10 22:03:30.980118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:30.980192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:30.980205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:23.298 [2024-12-10 22:03:30.980229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:32:23.298 [2024-12-10 22:03:30.980239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:30.980263] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:23.298 [2024-12-10 
22:03:30.981321] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:23.298 [2024-12-10 22:03:30.981351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:30.981361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:23.298 [2024-12-10 22:03:30.981377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.091 ms 00:32:23.298 [2024-12-10 22:03:30.981387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:30.981462] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5c1ae8f0-3b39-403a-9320-2fdfd418b7f8 00:32:23.298 [2024-12-10 22:03:30.983550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:30.983586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:23.298 [2024-12-10 22:03:30.983599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:23.298 [2024-12-10 22:03:30.983613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:30.995569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:30.995614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:23.298 [2024-12-10 22:03:30.995643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.923 ms 00:32:23.298 [2024-12-10 22:03:30.995656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:30.995706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:30.995723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:23.298 [2024-12-10 22:03:30.995734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:32:23.298 [2024-12-10 22:03:30.995749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:30.995817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:30.995834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:23.298 [2024-12-10 22:03:30.995845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:23.298 [2024-12-10 22:03:30.995862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:30.995888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:23.298 [2024-12-10 22:03:31.001792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:31.001821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:23.298 [2024-12-10 22:03:31.001837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.918 ms 00:32:23.298 [2024-12-10 22:03:31.001847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:31.001881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:31.001892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:23.298 [2024-12-10 22:03:31.001904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:23.298 [2024-12-10 22:03:31.001914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:31.001949] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:23.298 [2024-12-10 22:03:31.002101] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:23.298 [2024-12-10 22:03:31.002123] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:23.298 [2024-12-10 22:03:31.002136] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:23.298 [2024-12-10 22:03:31.002167] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:23.298 [2024-12-10 22:03:31.002179] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:23.298 [2024-12-10 22:03:31.002193] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:23.298 [2024-12-10 22:03:31.002219] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:23.298 [2024-12-10 22:03:31.002237] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:23.298 [2024-12-10 22:03:31.002247] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:23.298 [2024-12-10 22:03:31.002261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:31.002271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:23.298 [2024-12-10 22:03:31.002285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:32:23.298 [2024-12-10 22:03:31.002295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:31.002374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.298 [2024-12-10 22:03:31.002395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:23.298 [2024-12-10 22:03:31.002409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:32:23.298 [2024-12-10 22:03:31.002419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.298 [2024-12-10 22:03:31.002519] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:23.298 [2024-12-10 22:03:31.002532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:23.298 [2024-12-10 22:03:31.002547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:23.298 [2024-12-10 22:03:31.002557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.298 [2024-12-10 22:03:31.002570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:23.298 [2024-12-10 22:03:31.002580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:23.298 [2024-12-10 22:03:31.002592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:23.298 [2024-12-10 22:03:31.002603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:23.298 [2024-12-10 22:03:31.002615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:23.298 [2024-12-10 22:03:31.002624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.298 [2024-12-10 22:03:31.002636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:23.298 [2024-12-10 22:03:31.002646] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:23.298 [2024-12-10 22:03:31.002660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.298 [2024-12-10 22:03:31.002669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:23.298 [2024-12-10 22:03:31.002682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:23.298 [2024-12-10 22:03:31.002691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.298 [2024-12-10 22:03:31.002706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:23.299 [2024-12-10 22:03:31.002716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:23.299 [2024-12-10 22:03:31.002727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.299 [2024-12-10 22:03:31.002736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:23.299 [2024-12-10 22:03:31.002748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:23.299 [2024-12-10 22:03:31.002757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:23.299 [2024-12-10 22:03:31.002768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:23.299 [2024-12-10 22:03:31.002778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:23.299 [2024-12-10 22:03:31.002789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:23.299 [2024-12-10 22:03:31.002798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:23.299 [2024-12-10 22:03:31.002809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:23.299 [2024-12-10 22:03:31.002818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:23.299 [2024-12-10 22:03:31.002830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:23.299 [2024-12-10 22:03:31.002839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:23.299 [2024-12-10 22:03:31.002850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:23.299 [2024-12-10 22:03:31.002859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:23.299 [2024-12-10 22:03:31.002873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:23.299 [2024-12-10 22:03:31.002882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.299 [2024-12-10 22:03:31.002894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:23.299 [2024-12-10 22:03:31.002903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:23.299 [2024-12-10 22:03:31.002914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.299 [2024-12-10 22:03:31.002923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:23.299 [2024-12-10 22:03:31.002937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:23.299 [2024-12-10 22:03:31.002948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.299 [2024-12-10 22:03:31.002959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:23.299 [2024-12-10 22:03:31.002968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:23.299 [2024-12-10 22:03:31.002980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.299 [2024-12-10 22:03:31.002989] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:23.299 [2024-12-10 22:03:31.003001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:23.299 [2024-12-10 22:03:31.003011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:23.299 [2024-12-10 22:03:31.003023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:23.299 [2024-12-10 22:03:31.003033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:23.299 [2024-12-10 22:03:31.003059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:23.299 [2024-12-10 22:03:31.003070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:23.299 [2024-12-10 22:03:31.003083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:23.299 [2024-12-10 22:03:31.003092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:23.299 [2024-12-10 22:03:31.003105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:23.299 [2024-12-10 22:03:31.003116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:23.299 [2024-12-10 22:03:31.003132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:23.299 [2024-12-10 22:03:31.003160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:23.299 [2024-12-10 22:03:31.003194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:23.299 [2024-12-10 22:03:31.003206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:23.299 [2024-12-10 22:03:31.003217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:23.299 [2024-12-10 22:03:31.003231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:23.299 [2024-12-10 22:03:31.003315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:23.299 [2024-12-10 22:03:31.003328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:23.299 [2024-12-10 22:03:31.003359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:23.299 [2024-12-10 22:03:31.003369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:23.299 [2024-12-10 22:03:31.003382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:23.299 [2024-12-10 22:03:31.003393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:23.299 [2024-12-10 22:03:31.003407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:23.299 [2024-12-10 22:03:31.003418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:32:23.299 [2024-12-10 22:03:31.003431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:23.299 [2024-12-10 22:03:31.003476] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
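For reference, the capacities echoed in the layout dump above (base device 20480.00 MiB, NV cache 5120.00 MiB) fall straight out of the earlier bdev_get_bdevs geometry: block_size 4096 x num_blocks 5242880 = 21474836480 bytes = 20480 MiB. A minimal sketch of that get_bdev_size computation, assuming rpc.py and jq are on PATH and the bdev name or UUID is passed as $1 (an illustrative wrapper, not the helper's verbatim source):

  # Query the bdev, pull its geometry, and convert to MiB, as the trace above does.
  bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$1")
  bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in the run above
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 5242880 in the run above
  echo $(( bs * nb / 1024 / 1024 ))              # prints 20480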
00:32:23.299 [2024-12-10 22:03:31.003495] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:28.571 [2024-12-10 22:03:36.219786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.571 [2024-12-10 22:03:36.219874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:28.571 [2024-12-10 22:03:36.219894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5224.776 ms 00:32:28.571 [2024-12-10 22:03:36.219909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.571 [2024-12-10 22:03:36.258237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.571 [2024-12-10 22:03:36.258294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:28.571 [2024-12-10 22:03:36.258310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.049 ms 00:32:28.571 [2024-12-10 22:03:36.258325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.571 [2024-12-10 22:03:36.258407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.571 [2024-12-10 22:03:36.258423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:28.571 [2024-12-10 22:03:36.258434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:28.571 [2024-12-10 22:03:36.258465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.305549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.305598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:28.831 [2024-12-10 22:03:36.305613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.086 ms 00:32:28.831 [2024-12-10 22:03:36.305626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.305662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.305680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:28.831 [2024-12-10 22:03:36.305691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:28.831 [2024-12-10 22:03:36.305703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.306522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.306546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:28.831 [2024-12-10 22:03:36.306569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.768 ms 00:32:28.831 [2024-12-10 22:03:36.306583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.306624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.306637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:28.831 [2024-12-10 22:03:36.306651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:32:28.831 [2024-12-10 22:03:36.306667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.330365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.330407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:28.831 [2024-12-10 22:03:36.330421] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.715 ms 00:32:28.831 [2024-12-10 22:03:36.330434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.371818] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:28.831 [2024-12-10 22:03:36.373448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.373481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:28.831 [2024-12-10 22:03:36.373506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.973 ms 00:32:28.831 [2024-12-10 22:03:36.373521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.416300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.416338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:28.831 [2024-12-10 22:03:36.416356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.799 ms 00:32:28.831 [2024-12-10 22:03:36.416367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.416464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.416481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:28.831 [2024-12-10 22:03:36.416498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:32:28.831 [2024-12-10 22:03:36.416509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.452691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.452737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:28.831 [2024-12-10 22:03:36.452771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.182 ms 00:32:28.831 [2024-12-10 22:03:36.452783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.487106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.487140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:28.831 [2024-12-10 22:03:36.487156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.328 ms 00:32:28.831 [2024-12-10 22:03:36.487166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.831 [2024-12-10 22:03:36.487842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.831 [2024-12-10 22:03:36.487861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:28.831 [2024-12-10 22:03:36.487875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:32:28.831 [2024-12-10 22:03:36.487887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.091 [2024-12-10 22:03:36.611285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:29.091 [2024-12-10 22:03:36.611323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:29.091 [2024-12-10 22:03:36.611359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 123.542 ms 00:32:29.091 [2024-12-10 22:03:36.611370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:29.091 [2024-12-10 22:03:36.647222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:29.091 [2024-12-10 22:03:36.647260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map
00:32:29.091 [2024-12-10 22:03:36.647277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.826 ms
00:32:29.091 [2024-12-10 22:03:36.647287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:29.091 [2024-12-10 22:03:36.681889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:29.091 [2024-12-10 22:03:36.681922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log
00:32:29.091 [2024-12-10 22:03:36.681937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.611 ms
00:32:29.091 [2024-12-10 22:03:36.681946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:29.091 [2024-12-10 22:03:36.716103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:29.091 [2024-12-10 22:03:36.716133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state
00:32:29.091 [2024-12-10 22:03:36.716149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.168 ms
00:32:29.091 [2024-12-10 22:03:36.716160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:29.091 [2024-12-10 22:03:36.716204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:29.091 [2024-12-10 22:03:36.716216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller
00:32:29.091 [2024-12-10 22:03:36.716232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:32:29.091 [2024-12-10 22:03:36.716241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:29.091 [2024-12-10 22:03:36.716340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:29.091 [2024-12-10 22:03:36.716355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:32:29.091 [2024-12-10 22:03:36.716367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms
00:32:29.091 [2024-12-10 22:03:36.716376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:29.091 [2024-12-10 22:03:36.717678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5746.551 ms, result 0
00:32:29.091 {
00:32:29.091 "name": "ftl",
00:32:29.091 "uuid": "5c1ae8f0-3b39-403a-9320-2fdfd418b7f8"
00:32:29.091 }
00:32:29.091 22:03:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP
00:32:29.350 [2024-12-10 22:03:36.992332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:29.350 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
00:32:29.609 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
00:32:29.868 [2024-12-10 22:03:37.368118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:32:29.868 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
00:32:29.868 [2024-12-10 22:03:37.561891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:32:29.868 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=()
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 ))
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:32:30.436 Fill FTL, iteration 1
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1'
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]]
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85368
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85368 /var/tmp/spdk.tgt.sock
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85368 ']'
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:30.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...'
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:30.436 22:03:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:30.436 [2024-12-10 22:03:38.014584] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
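waitforlisten (from common/autotest_common.sh) blocks here until the freshly forked spdk_tgt answers RPCs on /var/tmp/spdk.tgt.sock. Its real implementation is not reproduced in this log; a minimal equivalent of the polling it performs might look like the following, where rpc_get_methods is a standard SPDK RPC and the retry count mirrors the max_retries=100 seen above:

  # Poll the RPC socket until the target responds, or give up after max_retries tries.
  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'   # unquoted on purpose below
  max_retries=100
  until $rpc rpc_get_methods >/dev/null 2>&1; do
      (( --max_retries > 0 )) || { echo 'spdk_tgt did not start' >&2; exit 1; }
      sleep 0.1
  done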
00:32:30.436 [2024-12-10 22:03:38.014733] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85368 ]
00:32:30.695 [2024-12-10 22:03:38.196716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:30.695 [2024-12-10 22:03:38.312517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:32:31.632 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:31.632 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:32:31.632 22:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
00:32:31.891 ftln1
00:32:31.891 22:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": ['
00:32:31.891 22:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}'
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85368
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85368 ']'
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85368
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85368
00:32:32.150 killing process with pid 85368 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85368'
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85368
00:32:32.150 22:03:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85368
00:32:34.685 22:03:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid
00:32:34.685 22:03:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:32:34.685 [2024-12-10 22:03:42.221733] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
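The echo '{"subsystems": [' / save_subsystem_config -n bdev / echo ']}' lines above are how the initiator config consumed by spdk_dd gets built: the bdev subsystem dump from the short-lived spdk_tgt is wrapped in a top-level subsystems array. A condensed sketch, assuming the combined output is redirected to the ini.json path passed to --json:

  # Assemble the spdk_dd --json config from the live target's bdev state.
  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json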
00:32:34.685 [2024-12-10 22:03:42.221872] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85425 ] 00:32:34.685 [2024-12-10 22:03:42.405401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.944 [2024-12-10 22:03:42.536407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.322  [2024-12-10T22:03:45.429Z] Copying: 264/1024 [MB] (264 MBps) [2024-12-10T22:03:46.364Z] Copying: 518/1024 [MB] (254 MBps) [2024-12-10T22:03:47.299Z] Copying: 752/1024 [MB] (234 MBps) [2024-12-10T22:03:47.299Z] Copying: 992/1024 [MB] (240 MBps) [2024-12-10T22:03:48.677Z] Copying: 1024/1024 [MB] (average 247 MBps) 00:32:40.946 00:32:40.946 Calculate MD5 checksum, iteration 1 00:32:40.946 22:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:40.946 22:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:40.947 22:03:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:40.947 22:03:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:40.947 22:03:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:40.947 22:03:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:40.947 22:03:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:40.947 22:03:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:40.947 [2024-12-10 22:03:48.475903] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
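An aside on reading this log: every FTL management action is bracketed by a trace_step quartet (Action / name / duration / status), and the 'FTL startup' total reported earlier (5746.551 ms) is dominated by the 5224.776 ms NV cache scrub. Assuming the console output has been saved to a file such as build.log (a hypothetical name), the per-step durations can be totalled offline; since steps can nest, the sum is an upper bound rather than an exact match for any one management process:

  # Sum every 'duration: X ms' the trace printed.
  grep -o 'duration: [0-9.]* ms' build.log | awk '{sum += $2} END {printf "%.3f ms\n", sum}'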
00:32:40.947 [2024-12-10 22:03:48.476242] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85503 ] 00:32:40.947 [2024-12-10 22:03:48.658643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.205 [2024-12-10 22:03:48.784907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.583  [2024-12-10T22:03:51.251Z] Copying: 630/1024 [MB] (630 MBps) [2024-12-10T22:03:52.215Z] Copying: 1024/1024 [MB] (average 618 MBps) 00:32:44.484 00:32:44.484 22:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:44.484 22:03:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:46.397 Fill FTL, iteration 2 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ba39025bb6dbbc9b981b9e0f8bb83a4b 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:46.397 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:46.397 [2024-12-10 22:03:53.698633] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
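The checksum bookkeeping in the run above is plain shell: hash the file tcp_dd just read back, keep the digest field, and stash it in the sums[] slot for the current iteration (ba39025bb6dbbc9b981b9e0f8bb83a4b for iteration 1). The slots exist so the same ranges can be re-read and compared later in the test, after the shutdown and restart; a compressed sketch of both halves, with i assumed to hold the iteration index:

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file   # tcp_dd read-back target
  sums[i]=$(md5sum "$file" | cut -f1 -d' ')         # record pass, as traced above
  # ...after the restart, re-read the same range into $file, then:
  [[ $(md5sum "$file" | cut -f1 -d' ') == "${sums[i]}" ]] || echo "MD5 mismatch, iteration $i" >&2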
00:32:46.397 [2024-12-10 22:03:53.698992] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85560 ] 00:32:46.397 [2024-12-10 22:03:53.883011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.397 [2024-12-10 22:03:54.013751] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.301  [2024-12-10T22:03:56.599Z] Copying: 258/1024 [MB] (258 MBps) [2024-12-10T22:03:57.536Z] Copying: 504/1024 [MB] (246 MBps) [2024-12-10T22:03:58.914Z] Copying: 748/1024 [MB] (244 MBps) [2024-12-10T22:03:58.914Z] Copying: 992/1024 [MB] (244 MBps) [2024-12-10T22:03:59.851Z] Copying: 1024/1024 [MB] (average 247 MBps) 00:32:52.120 00:32:52.379 Calculate MD5 checksum, iteration 2 00:32:52.379 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:52.379 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:52.379 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:52.379 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:52.379 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:52.379 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:52.379 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:52.379 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:52.379 [2024-12-10 22:03:59.932655] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
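One detail worth making explicit: the --seek and --skip values handed to spdk_dd here count blocks of --bs bytes, not bytes. Iteration 1 wrote count=1024 blocks of bs=1048576 bytes and iteration 2 resumed at --seek=1024, so each iteration covers exactly size=1073741824 bytes (1 GiB) and the final seek=2048 mark sits at the 2 GiB boundary. In shell terms:

  bs=1048576; count=1024
  echo $(( bs * count ))   # 1073741824 bytes per iteration, matching $size above
  echo $(( 2048 * bs ))    # byte offset of the final seek=2048 mark: 2147483648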
00:32:52.379 [2024-12-10 22:03:59.932997] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85629 ] 00:32:52.638 [2024-12-10 22:04:00.113681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.638 [2024-12-10 22:04:00.242533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.542  [2024-12-10T22:04:02.840Z] Copying: 645/1024 [MB] (645 MBps) [2024-12-10T22:04:04.219Z] Copying: 1024/1024 [MB] (average 633 MBps) 00:32:56.488 00:32:56.488 22:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:56.488 22:04:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:58.391 22:04:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:58.391 22:04:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2dfc4b9fdd9e5d2ec5f50975a0983330 00:32:58.391 22:04:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:58.391 22:04:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:58.391 22:04:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:58.391 [2024-12-10 22:04:05.930699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.391 [2024-12-10 22:04:05.930756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:58.391 [2024-12-10 22:04:05.930773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:58.391 [2024-12-10 22:04:05.930800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.391 [2024-12-10 22:04:05.930831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.391 [2024-12-10 22:04:05.930847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:58.391 [2024-12-10 22:04:05.930859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:58.391 [2024-12-10 22:04:05.930870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.392 [2024-12-10 22:04:05.930891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.392 [2024-12-10 22:04:05.930903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:58.392 [2024-12-10 22:04:05.930914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:58.392 [2024-12-10 22:04:05.930924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.392 [2024-12-10 22:04:05.931013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.282 ms, result 0 00:32:58.392 true 00:32:58.392 22:04:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:58.650 { 00:32:58.650 "name": "ftl", 00:32:58.650 "properties": [ 00:32:58.650 { 00:32:58.650 "name": "superblock_version", 00:32:58.650 "value": 5, 00:32:58.650 "read-only": true 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "name": "base_device", 00:32:58.650 "bands": [ 00:32:58.650 { 00:32:58.650 "id": 0, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 
00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 1, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 2, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 3, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 4, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 5, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 6, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 7, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 8, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 9, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 10, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 11, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 12, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 13, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 14, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 15, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 16, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 17, 00:32:58.650 "state": "FREE", 00:32:58.650 "validity": 0.0 00:32:58.650 } 00:32:58.650 ], 00:32:58.650 "read-only": true 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "name": "cache_device", 00:32:58.650 "type": "bdev", 00:32:58.650 "chunks": [ 00:32:58.650 { 00:32:58.650 "id": 0, 00:32:58.650 "state": "INACTIVE", 00:32:58.650 "utilization": 0.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 1, 00:32:58.650 "state": "CLOSED", 00:32:58.650 "utilization": 1.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 2, 00:32:58.650 "state": "CLOSED", 00:32:58.650 "utilization": 1.0 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 3, 00:32:58.650 "state": "OPEN", 00:32:58.650 "utilization": 0.001953125 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "id": 4, 00:32:58.650 "state": "OPEN", 00:32:58.650 "utilization": 0.0 00:32:58.650 } 00:32:58.650 ], 00:32:58.650 "read-only": true 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "name": "verbose_mode", 00:32:58.650 "value": true, 00:32:58.650 "unit": "", 00:32:58.650 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:58.650 }, 00:32:58.650 { 00:32:58.650 "name": "prep_upgrade_on_shutdown", 00:32:58.650 "value": false, 00:32:58.650 "unit": "", 00:32:58.650 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:58.650 } 00:32:58.650 ] 00:32:58.650 } 00:32:58.650 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:58.650 [2024-12-10 22:04:06.346679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:58.650 [2024-12-10 22:04:06.346730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:58.650 [2024-12-10 22:04:06.346746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:58.650 [2024-12-10 22:04:06.346756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.650 [2024-12-10 22:04:06.346781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.651 [2024-12-10 22:04:06.346793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:58.651 [2024-12-10 22:04:06.346804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:58.651 [2024-12-10 22:04:06.346814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.651 [2024-12-10 22:04:06.346834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.651 [2024-12-10 22:04:06.346844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:58.651 [2024-12-10 22:04:06.346855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:58.651 [2024-12-10 22:04:06.346864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.651 [2024-12-10 22:04:06.346921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.240 ms, result 0 00:32:58.651 true 00:32:58.651 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:58.651 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:58.651 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:58.909 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:58.909 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:58.909 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:59.169 [2024-12-10 22:04:06.754646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:59.169 [2024-12-10 22:04:06.754916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:59.169 [2024-12-10 22:04:06.754960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:59.169 [2024-12-10 22:04:06.754972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:59.169 [2024-12-10 22:04:06.755016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:59.169 [2024-12-10 22:04:06.755030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:59.169 [2024-12-10 22:04:06.755042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:59.169 [2024-12-10 22:04:06.755052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:59.169 [2024-12-10 22:04:06.755089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:59.169 [2024-12-10 22:04:06.755102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:59.169 [2024-12-10 22:04:06.755114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:59.169 [2024-12-10 22:04:06.755124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:59.169 [2024-12-10 22:04:06.755194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.528 ms, result 0 00:32:59.169 true 00:32:59.169 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:59.428 { 00:32:59.428 "name": "ftl", 00:32:59.428 "properties": [ 00:32:59.428 { 00:32:59.428 "name": "superblock_version", 00:32:59.428 "value": 5, 00:32:59.428 "read-only": true 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "name": "base_device", 00:32:59.428 "bands": [ 00:32:59.428 { 00:32:59.428 "id": 0, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 1, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 2, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 3, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 4, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 5, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 6, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 7, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 8, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 9, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 10, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 11, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 12, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 13, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 14, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 15, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 16, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 17, 00:32:59.428 "state": "FREE", 00:32:59.428 "validity": 0.0 00:32:59.428 } 00:32:59.428 ], 00:32:59.428 "read-only": true 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "name": "cache_device", 00:32:59.428 "type": "bdev", 00:32:59.428 "chunks": [ 00:32:59.428 { 00:32:59.428 "id": 0, 00:32:59.428 "state": "INACTIVE", 00:32:59.428 "utilization": 0.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 1, 00:32:59.428 "state": "CLOSED", 00:32:59.428 "utilization": 1.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 2, 00:32:59.428 "state": "CLOSED", 00:32:59.428 "utilization": 1.0 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 3, 00:32:59.428 "state": "OPEN", 00:32:59.428 "utilization": 0.001953125 00:32:59.428 }, 00:32:59.428 { 00:32:59.428 "id": 4, 00:32:59.429 "state": "OPEN", 00:32:59.429 "utilization": 0.0 00:32:59.429 } 00:32:59.429 ], 00:32:59.429 "read-only": true 00:32:59.429 }, 00:32:59.429 { 00:32:59.429 "name": "verbose_mode", 
00:32:59.429 "value": true, 00:32:59.429 "unit": "", 00:32:59.429 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:59.429 }, 00:32:59.429 { 00:32:59.429 "name": "prep_upgrade_on_shutdown", 00:32:59.429 "value": true, 00:32:59.429 "unit": "", 00:32:59.429 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:59.429 } 00:32:59.429 ] 00:32:59.429 } 00:32:59.429 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:59.429 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85229 ]] 00:32:59.429 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85229 00:32:59.429 22:04:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85229 ']' 00:32:59.429 22:04:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85229 00:32:59.429 22:04:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:59.429 22:04:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.429 22:04:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85229 00:32:59.429 killing process with pid 85229 00:32:59.429 22:04:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.429 22:04:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.429 22:04:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85229' 00:32:59.429 22:04:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85229 00:32:59.429 22:04:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85229 00:33:00.807 [2024-12-10 22:04:08.104471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:00.807 [2024-12-10 22:04:08.123542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.807 [2024-12-10 22:04:08.123581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:00.807 [2024-12-10 22:04:08.123597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:00.807 [2024-12-10 22:04:08.123608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:00.807 [2024-12-10 22:04:08.123631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:00.807 [2024-12-10 22:04:08.128513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:00.807 [2024-12-10 22:04:08.128538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:00.807 [2024-12-10 22:04:08.128551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.873 ms 00:33:00.807 [2024-12-10 22:04:08.128567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.413821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.413889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:08.929 [2024-12-10 22:04:15.413912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7297.052 ms 00:33:08.929 [2024-12-10 22:04:15.413923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.415097] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.415125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:08.929 [2024-12-10 22:04:15.415140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.158 ms 00:33:08.929 [2024-12-10 22:04:15.415153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.416109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.416143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:08.929 [2024-12-10 22:04:15.416155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.927 ms 00:33:08.929 [2024-12-10 22:04:15.416173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.431097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.431132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:08.929 [2024-12-10 22:04:15.431145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.911 ms 00:33:08.929 [2024-12-10 22:04:15.431155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.439867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.439903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:08.929 [2024-12-10 22:04:15.439917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.691 ms 00:33:08.929 [2024-12-10 22:04:15.439928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.440031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.440062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:08.929 [2024-12-10 22:04:15.440073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:33:08.929 [2024-12-10 22:04:15.440084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.453808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.453838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:08.929 [2024-12-10 22:04:15.453849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.730 ms 00:33:08.929 [2024-12-10 22:04:15.453859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.467992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.468021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:08.929 [2024-12-10 22:04:15.468032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.122 ms 00:33:08.929 [2024-12-10 22:04:15.468042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.482222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.482251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:08.929 [2024-12-10 22:04:15.482263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.162 ms 00:33:08.929 [2024-12-10 22:04:15.482273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.495789] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.929 [2024-12-10 22:04:15.495817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:08.929 [2024-12-10 22:04:15.495828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.467 ms 00:33:08.929 [2024-12-10 22:04:15.495838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.929 [2024-12-10 22:04:15.495870] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:08.929 [2024-12-10 22:04:15.495897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:08.929 [2024-12-10 22:04:15.495910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:08.929 [2024-12-10 22:04:15.495921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:08.929 [2024-12-10 22:04:15.495932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:08.929 [2024-12-10 22:04:15.495942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:08.929 [2024-12-10 22:04:15.495953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:08.929 [2024-12-10 22:04:15.495963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:08.929 [2024-12-10 22:04:15.495976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:08.929 [2024-12-10 22:04:15.495985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:08.929 [2024-12-10 22:04:15.495995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:08.929 [2024-12-10 22:04:15.496005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:08.930 [2024-12-10 22:04:15.496016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:08.930 [2024-12-10 22:04:15.496025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:08.930 [2024-12-10 22:04:15.496035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:08.930 [2024-12-10 22:04:15.496046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:08.930 [2024-12-10 22:04:15.496066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:08.930 [2024-12-10 22:04:15.496076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:08.930 [2024-12-10 22:04:15.496086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:08.930 [2024-12-10 22:04:15.496099] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:08.930 [2024-12-10 22:04:15.496108] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5c1ae8f0-3b39-403a-9320-2fdfd418b7f8 00:33:08.930 [2024-12-10 22:04:15.496123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:08.930 [2024-12-10 22:04:15.496133] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:08.930 [2024-12-10 22:04:15.496142] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:08.930 [2024-12-10 22:04:15.496153] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:08.930 [2024-12-10 22:04:15.496167] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:08.930 [2024-12-10 22:04:15.496177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:08.930 [2024-12-10 22:04:15.496191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:08.930 [2024-12-10 22:04:15.496202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:08.930 [2024-12-10 22:04:15.496210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:08.930 [2024-12-10 22:04:15.496219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.930 [2024-12-10 22:04:15.496229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:08.930 [2024-12-10 22:04:15.496239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.350 ms 00:33:08.930 [2024-12-10 22:04:15.496249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.514877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.930 [2024-12-10 22:04:15.514906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:08.930 [2024-12-10 22:04:15.514924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.640 ms 00:33:08.930 [2024-12-10 22:04:15.514934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.515483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.930 [2024-12-10 22:04:15.515496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:08.930 [2024-12-10 22:04:15.515507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms 00:33:08.930 [2024-12-10 22:04:15.515517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.578928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.578963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:08.930 [2024-12-10 22:04:15.578975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.578986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.579016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.579027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:08.930 [2024-12-10 22:04:15.579037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.579047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.579149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.579164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:08.930 [2024-12-10 22:04:15.579180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.579190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.579207] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.579218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:08.930 [2024-12-10 22:04:15.579227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.579237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.700529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.700571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:08.930 [2024-12-10 22:04:15.700591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.700602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.799396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.799442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:08.930 [2024-12-10 22:04:15.799456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.799467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.799572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.799596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:08.930 [2024-12-10 22:04:15.799607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.799618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.799668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.799679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:08.930 [2024-12-10 22:04:15.799689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.799699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.799810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.799824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:08.930 [2024-12-10 22:04:15.799833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.799843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.799888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.799901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:08.930 [2024-12-10 22:04:15.799912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.799921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.799963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.799975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:08.930 [2024-12-10 22:04:15.799985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.799995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 
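The statistics dumped a few entries above report total writes 786752, user writes 524288, and WAF 1.5006. WAF here is simply the ratio of media writes to host writes, so the figure can be checked directly; awk is used below only as a calculator and is not part of the test:

    # Write amplification factor = total (media) writes / user (host) writes
    awk 'BEGIN { printf "%.4f\n", 786752 / 524288 }'   # prints 1.5006, matching the log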
[2024-12-10 22:04:15.800043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:08.930 [2024-12-10 22:04:15.800055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:08.930 [2024-12-10 22:04:15.800094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:08.930 [2024-12-10 22:04:15.800104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.930 [2024-12-10 22:04:15.800237] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7689.118 ms, result 0 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85832 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85832 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85832 ']' 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.221 22:04:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:12.221 [2024-12-10 22:04:19.565580] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
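With the clean 'FTL shutdown' finished (7689.118 ms), tcp_target_setup relaunches spdk_tgt from the saved tgt.json and blocks in waitforlisten until the RPC socket answers. A condensed sketch of that sequence, assuming the default /var/tmp/spdk.sock socket; the readiness loop only illustrates what waitforlisten does, the real helper in autotest_common.sh is more elaborate:

    # Restart the target pinned to core 0 with the config saved before shutdown.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # Poll the RPC socket until the target responds; rpc_get_methods is a
    # generic rpc.py call, used here purely as a liveness probe.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done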
00:33:12.221 [2024-12-10 22:04:19.565719] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85832 ] 00:33:12.221 [2024-12-10 22:04:19.748506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.221 [2024-12-10 22:04:19.861632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.157 [2024-12-10 22:04:20.845454] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:13.157 [2024-12-10 22:04:20.845522] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:13.415 [2024-12-10 22:04:20.991834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.415 [2024-12-10 22:04:20.991881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:13.415 [2024-12-10 22:04:20.991897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:13.415 [2024-12-10 22:04:20.991908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.415 [2024-12-10 22:04:20.991962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.415 [2024-12-10 22:04:20.991974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:13.415 [2024-12-10 22:04:20.991984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:13.415 [2024-12-10 22:04:20.991994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.415 [2024-12-10 22:04:20.992033] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:13.416 [2024-12-10 22:04:20.993027] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:13.416 [2024-12-10 22:04:20.993073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:20.993084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:13.416 [2024-12-10 22:04:20.993095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.062 ms 00:33:13.416 [2024-12-10 22:04:20.993106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:20.994572] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:13.416 [2024-12-10 22:04:21.012930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:21.012968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:13.416 [2024-12-10 22:04:21.012988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.389 ms 00:33:13.416 [2024-12-10 22:04:21.012999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.013080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:21.013093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:13.416 [2024-12-10 22:04:21.013104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:33:13.416 [2024-12-10 22:04:21.013113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.019917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 
22:04:21.019949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:13.416 [2024-12-10 22:04:21.019982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.739 ms 00:33:13.416 [2024-12-10 22:04:21.019993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.020080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:21.020095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:13.416 [2024-12-10 22:04:21.020107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:33:13.416 [2024-12-10 22:04:21.020120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.020163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:21.020185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:13.416 [2024-12-10 22:04:21.020198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:13.416 [2024-12-10 22:04:21.020208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.020236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:13.416 [2024-12-10 22:04:21.024802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:21.024835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:13.416 [2024-12-10 22:04:21.024846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.578 ms 00:33:13.416 [2024-12-10 22:04:21.024860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.024889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:21.024900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:13.416 [2024-12-10 22:04:21.024910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:13.416 [2024-12-10 22:04:21.024919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.024972] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:13.416 [2024-12-10 22:04:21.024998] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:13.416 [2024-12-10 22:04:21.025035] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:13.416 [2024-12-10 22:04:21.025065] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:13.416 [2024-12-10 22:04:21.025186] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:13.416 [2024-12-10 22:04:21.025201] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:13.416 [2024-12-10 22:04:21.025214] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:13.416 [2024-12-10 22:04:21.025227] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:13.416 [2024-12-10 22:04:21.025240] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:13.416 [2024-12-10 22:04:21.025255] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:13.416 [2024-12-10 22:04:21.025265] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:13.416 [2024-12-10 22:04:21.025275] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:13.416 [2024-12-10 22:04:21.025286] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:13.416 [2024-12-10 22:04:21.025297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:21.025307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:13.416 [2024-12-10 22:04:21.025317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:33:13.416 [2024-12-10 22:04:21.025328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.025401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.416 [2024-12-10 22:04:21.025412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:13.416 [2024-12-10 22:04:21.025426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:13.416 [2024-12-10 22:04:21.025436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.416 [2024-12-10 22:04:21.025528] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:13.416 [2024-12-10 22:04:21.025541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:13.416 [2024-12-10 22:04:21.025551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:13.416 [2024-12-10 22:04:21.025562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:13.416 [2024-12-10 22:04:21.025582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:13.416 [2024-12-10 22:04:21.025602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:13.416 [2024-12-10 22:04:21.025612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:13.416 [2024-12-10 22:04:21.025621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:13.416 [2024-12-10 22:04:21.025642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:13.416 [2024-12-10 22:04:21.025652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:13.416 [2024-12-10 22:04:21.025672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:13.416 [2024-12-10 22:04:21.025681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:13.416 [2024-12-10 22:04:21.025701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:13.416 [2024-12-10 22:04:21.025711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025721] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:13.416 [2024-12-10 22:04:21.025730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:13.416 [2024-12-10 22:04:21.025738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:13.416 [2024-12-10 22:04:21.025748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:13.416 [2024-12-10 22:04:21.025768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:13.416 [2024-12-10 22:04:21.025777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:13.416 [2024-12-10 22:04:21.025787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:13.416 [2024-12-10 22:04:21.025797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:13.416 [2024-12-10 22:04:21.025806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:13.416 [2024-12-10 22:04:21.025815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:13.416 [2024-12-10 22:04:21.025824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:13.416 [2024-12-10 22:04:21.025833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:13.416 [2024-12-10 22:04:21.025843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:13.416 [2024-12-10 22:04:21.025852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:13.416 [2024-12-10 22:04:21.025861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:13.416 [2024-12-10 22:04:21.025880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:13.416 [2024-12-10 22:04:21.025890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:13.416 [2024-12-10 22:04:21.025909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:13.416 [2024-12-10 22:04:21.025935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:13.416 [2024-12-10 22:04:21.025945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.416 [2024-12-10 22:04:21.025953] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:13.417 [2024-12-10 22:04:21.025964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:13.417 [2024-12-10 22:04:21.025973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:13.417 [2024-12-10 22:04:21.025983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:13.417 [2024-12-10 22:04:21.025998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:13.417 [2024-12-10 22:04:21.026007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:13.417 [2024-12-10 22:04:21.026016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:13.417 [2024-12-10 22:04:21.026026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:13.417 [2024-12-10 22:04:21.026035] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:13.417 [2024-12-10 22:04:21.026045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:13.417 [2024-12-10 22:04:21.026056] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:13.417 [2024-12-10 22:04:21.026069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:13.417 [2024-12-10 22:04:21.026104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:13.417 [2024-12-10 22:04:21.026136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:13.417 [2024-12-10 22:04:21.026147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:13.417 [2024-12-10 22:04:21.026157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:13.417 [2024-12-10 22:04:21.026168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:13.417 [2024-12-10 22:04:21.026240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:13.417 [2024-12-10 22:04:21.026251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:13.417 [2024-12-10 22:04:21.026272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:13.417 [2024-12-10 22:04:21.026282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:13.417 [2024-12-10 22:04:21.026293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:13.417 [2024-12-10 22:04:21.026304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.417 [2024-12-10 22:04:21.026314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:13.417 [2024-12-10 22:04:21.026324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.834 ms 00:33:13.417 [2024-12-10 22:04:21.026334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.417 [2024-12-10 22:04:21.026381] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:13.417 [2024-12-10 22:04:21.026394] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:17.610 [2024-12-10 22:04:24.867022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:24.867094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:17.610 [2024-12-10 22:04:24.867112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3846.875 ms 00:33:17.610 [2024-12-10 22:04:24.867123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:24.905695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:24.905748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:17.610 [2024-12-10 22:04:24.905764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.246 ms 00:33:17.610 [2024-12-10 22:04:24.905775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:24.905863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:24.905884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:17.610 [2024-12-10 22:04:24.905896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:17.610 [2024-12-10 22:04:24.905906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:24.953428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:24.953473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:17.610 [2024-12-10 22:04:24.953488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.522 ms 00:33:17.610 [2024-12-10 22:04:24.953502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:24.953542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:24.953553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:17.610 [2024-12-10 22:04:24.953564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:17.610 [2024-12-10 22:04:24.953574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:24.954090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:24.954111] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:17.610 [2024-12-10 22:04:24.954124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.449 ms 00:33:17.610 [2024-12-10 22:04:24.954135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:24.954183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:24.954195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:17.610 [2024-12-10 22:04:24.954206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:17.610 [2024-12-10 22:04:24.954216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:24.976058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:24.976095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:17.610 [2024-12-10 22:04:24.976108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.853 ms 00:33:17.610 [2024-12-10 22:04:24.976119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.005610] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:17.610 [2024-12-10 22:04:25.005650] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:17.610 [2024-12-10 22:04:25.005665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.005677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:17.610 [2024-12-10 22:04:25.005688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.482 ms 00:33:17.610 [2024-12-10 22:04:25.005697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.024117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.024155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:17.610 [2024-12-10 22:04:25.024169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.405 ms 00:33:17.610 [2024-12-10 22:04:25.024179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.040967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.041001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:17.610 [2024-12-10 22:04:25.041014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.771 ms 00:33:17.610 [2024-12-10 22:04:25.041023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.058059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.058106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:17.610 [2024-12-10 22:04:25.058119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.018 ms 00:33:17.610 [2024-12-10 22:04:25.058128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.058919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.058952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:17.610 [2024-12-10 
22:04:25.058965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:33:17.610 [2024-12-10 22:04:25.058975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.142187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.142245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:17.610 [2024-12-10 22:04:25.142261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 83.322 ms 00:33:17.610 [2024-12-10 22:04:25.142272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.152350] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:17.610 [2024-12-10 22:04:25.152973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.153001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:17.610 [2024-12-10 22:04:25.153014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.671 ms 00:33:17.610 [2024-12-10 22:04:25.153025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.153132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.153150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:17.610 [2024-12-10 22:04:25.153163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:17.610 [2024-12-10 22:04:25.153174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.153243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.610 [2024-12-10 22:04:25.153256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:17.610 [2024-12-10 22:04:25.153267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:17.610 [2024-12-10 22:04:25.153277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.610 [2024-12-10 22:04:25.153301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.611 [2024-12-10 22:04:25.153311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:17.611 [2024-12-10 22:04:25.153326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:17.611 [2024-12-10 22:04:25.153337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.611 [2024-12-10 22:04:25.153376] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:17.611 [2024-12-10 22:04:25.153388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.611 [2024-12-10 22:04:25.153398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:17.611 [2024-12-10 22:04:25.153409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:17.611 [2024-12-10 22:04:25.153419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.611 [2024-12-10 22:04:25.186666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.611 [2024-12-10 22:04:25.186706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:17.611 [2024-12-10 22:04:25.186719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.277 ms 00:33:17.611 [2024-12-10 22:04:25.186730] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.611 [2024-12-10 22:04:25.186802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.611 [2024-12-10 22:04:25.186814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:17.611 [2024-12-10 22:04:25.186825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:17.611 [2024-12-10 22:04:25.186834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.611 [2024-12-10 22:04:25.187976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4202.492 ms, result 0 00:33:17.611 [2024-12-10 22:04:25.202986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.611 [2024-12-10 22:04:25.218978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:17.611 [2024-12-10 22:04:25.227642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:17.868 22:04:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.868 22:04:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:17.868 22:04:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:17.868 22:04:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:17.868 22:04:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:18.127 [2024-12-10 22:04:25.667177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.127 [2024-12-10 22:04:25.667214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:18.127 [2024-12-10 22:04:25.667231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:18.127 [2024-12-10 22:04:25.667241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.127 [2024-12-10 22:04:25.667262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.127 [2024-12-10 22:04:25.667273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:18.127 [2024-12-10 22:04:25.667283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:18.127 [2024-12-10 22:04:25.667293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.127 [2024-12-10 22:04:25.667311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:18.127 [2024-12-10 22:04:25.667321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:18.127 [2024-12-10 22:04:25.667331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:18.127 [2024-12-10 22:04:25.667340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:18.127 [2024-12-10 22:04:25.667392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.207 ms, result 0 00:33:18.127 true 00:33:18.127 22:04:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:18.385 { 00:33:18.385 "name": "ftl", 00:33:18.385 "properties": [ 00:33:18.385 { 00:33:18.385 "name": "superblock_version", 00:33:18.385 "value": 5, 00:33:18.385 "read-only": true 00:33:18.385 }, 
00:33:18.385 { 00:33:18.385 "name": "base_device", 00:33:18.385 "bands": [ 00:33:18.385 { 00:33:18.385 "id": 0, 00:33:18.385 "state": "CLOSED", 00:33:18.385 "validity": 1.0 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 1, 00:33:18.385 "state": "CLOSED", 00:33:18.385 "validity": 1.0 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 2, 00:33:18.385 "state": "CLOSED", 00:33:18.385 "validity": 0.007843137254901933 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 3, 00:33:18.385 "state": "FREE", 00:33:18.385 "validity": 0.0 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 4, 00:33:18.385 "state": "FREE", 00:33:18.385 "validity": 0.0 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 5, 00:33:18.385 "state": "FREE", 00:33:18.385 "validity": 0.0 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 6, 00:33:18.385 "state": "FREE", 00:33:18.385 "validity": 0.0 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 7, 00:33:18.385 "state": "FREE", 00:33:18.385 "validity": 0.0 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 8, 00:33:18.385 "state": "FREE", 00:33:18.385 "validity": 0.0 00:33:18.385 }, 00:33:18.385 { 00:33:18.385 "id": 9, 00:33:18.385 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 10, 00:33:18.386 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 11, 00:33:18.386 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 12, 00:33:18.386 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 13, 00:33:18.386 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 14, 00:33:18.386 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 15, 00:33:18.386 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 16, 00:33:18.386 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 17, 00:33:18.386 "state": "FREE", 00:33:18.386 "validity": 0.0 00:33:18.386 } 00:33:18.386 ], 00:33:18.386 "read-only": true 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "name": "cache_device", 00:33:18.386 "type": "bdev", 00:33:18.386 "chunks": [ 00:33:18.386 { 00:33:18.386 "id": 0, 00:33:18.386 "state": "INACTIVE", 00:33:18.386 "utilization": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 1, 00:33:18.386 "state": "OPEN", 00:33:18.386 "utilization": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 2, 00:33:18.386 "state": "OPEN", 00:33:18.386 "utilization": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 3, 00:33:18.386 "state": "FREE", 00:33:18.386 "utilization": 0.0 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "id": 4, 00:33:18.386 "state": "FREE", 00:33:18.386 "utilization": 0.0 00:33:18.386 } 00:33:18.386 ], 00:33:18.386 "read-only": true 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "name": "verbose_mode", 00:33:18.386 "value": true, 00:33:18.386 "unit": "", 00:33:18.386 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:18.386 }, 00:33:18.386 { 00:33:18.386 "name": "prep_upgrade_on_shutdown", 00:33:18.386 "value": false, 00:33:18.386 "unit": "", 00:33:18.386 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:18.386 } 00:33:18.386 ] 00:33:18.386 } 00:33:18.386 22:04:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:18.386 22:04:25 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:18.386 22:04:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:18.644 Validate MD5 checksum, iteration 1 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:18.644 22:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:18.903 [2024-12-10 22:04:26.413209] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
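tcp_dd above is a thin wrapper over spdk_dd: ini.json attaches an NVMe/TCP initiator to the target listening on 127.0.0.1 port 4420, ftln1 (the FTL bdev exposed over that transport) is opened as the input bdev, and the data lands in a local file. The invocation as it appears in the trace, with the flags annotated; the comments are explanatory only:

    # --ib=ftln1: input bdev reached through the NVMe/TCP initiator in ini.json
    # --bs=1048576 --count=1024: copy 1024 blocks of 1 MiB, i.e. 1 GiB per iteration
    # --qd=2: keep two I/Os in flight; --skip: starting offset on the input, in blocks
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0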
00:33:18.903 [2024-12-10 22:04:26.413329] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85918 ] 00:33:18.903 [2024-12-10 22:04:26.595882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.161 [2024-12-10 22:04:26.719013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.067  [2024-12-10T22:04:29.095Z] Copying: 639/1024 [MB] (639 MBps) [2024-12-10T22:04:31.033Z] Copying: 1024/1024 [MB] (average 627 MBps) 00:33:23.302 00:33:23.302 22:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:23.302 22:04:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ba39025bb6dbbc9b981b9e0f8bb83a4b 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ba39025bb6dbbc9b981b9e0f8bb83a4b != \b\a\3\9\0\2\5\b\b\6\d\b\b\c\9\b\9\8\1\b\9\e\0\f\8\b\b\8\3\a\4\b ]] 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:24.680 Validate MD5 checksum, iteration 2 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:24.680 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:24.939 [2024-12-10 22:04:32.462022] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 
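Each iteration reads the next 1 GiB window of ftln1 (skip advances 0, then 1024, then 2048), hashes the local copy, and compares it against the digest recorded for that window before the shutdown; iteration 1 above matched ba39025bb6dbbc9b981b9e0f8bb83a4b. The compare step, reconstructed from the md5sum/cut/[[ ]] traces; expected_sum stands in for the stored digest and the exit is illustrative:

    # Hash the file just copied from ftln1 and keep only the digest field.
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d ' ')
    # Data read back after the restart must match what was written before it.
    [[ "$sum" != "$expected_sum" ]] && exit 1
    skip=$((skip + 1024))   # advance one window: 1024 blocks of --bs (1 MiB)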
00:33:24.939 [2024-12-10 22:04:32.462369] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85989 ] 00:33:24.939 [2024-12-10 22:04:32.653018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.199 [2024-12-10 22:04:32.768009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.104  [2024-12-10T22:04:35.094Z] Copying: 622/1024 [MB] (622 MBps) [2024-12-10T22:04:37.001Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:33:29.270 00:33:29.270 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:29.270 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2dfc4b9fdd9e5d2ec5f50975a0983330 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2dfc4b9fdd9e5d2ec5f50975a0983330 != \2\d\f\c\4\b\9\f\d\d\9\e\5\d\2\e\c\5\f\5\0\9\7\5\a\0\9\8\3\3\3\0 ]] 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85832 ]] 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85832 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86055 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86055 00:33:31.176 22:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 86055 ']' 00:33:31.177 22:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.177 22:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.177 22:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
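This is the crux of the test: tcp_target_shutdown_dirty SIGKILLs the target (pid 85832) so FTL never gets a chance to persist a clean state, then tcp_target_setup restarts spdk_tgt from the saved JSON config, and the startup traced below has to take the recovery path instead of a fast clean-state load. A condensed sketch of the two helpers, with the spdk_tgt invocation verbatim from the trace; waitforlisten is the autotest_common.sh helper that polls the RPC socket:

  # Sketch of tcp_target_shutdown_dirty (ftl/common.sh@137-139) and
  # tcp_target_setup (ftl/common.sh@81-91), condensed from the trace.
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

  kill -9 "$spdk_tgt_pid"         # dirty: no FTL teardown, no clean-state flag
  unset spdk_tgt_pid

  "$spdk_tgt" '--cpumask=[0]' --config="$cnfg" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock answers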
00:33:31.177 22:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.177 22:04:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:31.177 [2024-12-10 22:04:38.512962] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:33:31.177 [2024-12-10 22:04:38.513303] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86055 ] 00:33:31.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85832 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:31.177 [2024-12-10 22:04:38.694163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.177 [2024-12-10 22:04:38.805960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.114 [2024-12-10 22:04:39.777991] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:32.114 [2024-12-10 22:04:39.778078] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:32.374 [2024-12-10 22:04:39.924487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.374 [2024-12-10 22:04:39.924534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:32.374 [2024-12-10 22:04:39.924550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:32.374 [2024-12-10 22:04:39.924559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.374 [2024-12-10 22:04:39.924615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.374 [2024-12-10 22:04:39.924627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:32.374 [2024-12-10 22:04:39.924637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:32.374 [2024-12-10 22:04:39.924646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.374 [2024-12-10 22:04:39.924675] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:32.374 [2024-12-10 22:04:39.925637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:32.374 [2024-12-10 22:04:39.925662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.374 [2024-12-10 22:04:39.925673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:32.374 [2024-12-10 22:04:39.925684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.999 ms 00:33:32.374 [2024-12-10 22:04:39.925694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.374 [2024-12-10 22:04:39.926046] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:32.374 [2024-12-10 22:04:39.950764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.374 [2024-12-10 22:04:39.950929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:32.374 [2024-12-10 22:04:39.950951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.759 ms 00:33:32.374 [2024-12-10 22:04:39.950962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.374 [2024-12-10 22:04:39.964327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:32.374 [2024-12-10 22:04:39.964366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:32.374 [2024-12-10 22:04:39.964378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:33:32.374 [2024-12-10 22:04:39.964388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.374 [2024-12-10 22:04:39.964889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.374 [2024-12-10 22:04:39.964916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:32.374 [2024-12-10 22:04:39.964928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.424 ms 00:33:32.374 [2024-12-10 22:04:39.964939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.374 [2024-12-10 22:04:39.965003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.375 [2024-12-10 22:04:39.965017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:32.375 [2024-12-10 22:04:39.965028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:33:32.375 [2024-12-10 22:04:39.965039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.375 [2024-12-10 22:04:39.965085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.375 [2024-12-10 22:04:39.965097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:32.375 [2024-12-10 22:04:39.965107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:32.375 [2024-12-10 22:04:39.965117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.375 [2024-12-10 22:04:39.965143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:32.375 [2024-12-10 22:04:39.968808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.375 [2024-12-10 22:04:39.968935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:32.375 [2024-12-10 22:04:39.968955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.676 ms 00:33:32.375 [2024-12-10 22:04:39.968980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.375 [2024-12-10 22:04:39.969019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.375 [2024-12-10 22:04:39.969031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:32.375 [2024-12-10 22:04:39.969043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:32.375 [2024-12-10 22:04:39.969053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.375 [2024-12-10 22:04:39.969105] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:32.375 [2024-12-10 22:04:39.969131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:32.375 [2024-12-10 22:04:39.969165] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:32.375 [2024-12-10 22:04:39.969187] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:32.375 [2024-12-10 22:04:39.969277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:32.375 [2024-12-10 22:04:39.969291] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:32.375 [2024-12-10 22:04:39.969304] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:32.375 [2024-12-10 22:04:39.969317] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:32.375 [2024-12-10 22:04:39.969329] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:32.375 [2024-12-10 22:04:39.969340] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:32.375 [2024-12-10 22:04:39.969351] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:32.375 [2024-12-10 22:04:39.969360] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:32.375 [2024-12-10 22:04:39.969370] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:32.375 [2024-12-10 22:04:39.969381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.375 [2024-12-10 22:04:39.969395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:32.375 [2024-12-10 22:04:39.969406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:33:32.375 [2024-12-10 22:04:39.969415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.375 [2024-12-10 22:04:39.969487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.375 [2024-12-10 22:04:39.969498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:32.375 [2024-12-10 22:04:39.969509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:33:32.375 [2024-12-10 22:04:39.969519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.375 [2024-12-10 22:04:39.969606] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:32.375 [2024-12-10 22:04:39.969618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:32.375 [2024-12-10 22:04:39.969632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:32.375 [2024-12-10 22:04:39.969643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:32.375 [2024-12-10 22:04:39.969663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:32.375 [2024-12-10 22:04:39.969683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:32.375 [2024-12-10 22:04:39.969693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:32.375 [2024-12-10 22:04:39.969703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:32.375 [2024-12-10 22:04:39.969722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:32.375 [2024-12-10 22:04:39.969731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:32.375 [2024-12-10 22:04:39.969749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:32.375 [2024-12-10 22:04:39.969758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:32.375 [2024-12-10 22:04:39.969777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:32.375 [2024-12-10 22:04:39.969788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:32.375 [2024-12-10 22:04:39.969807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:32.375 [2024-12-10 22:04:39.969827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:32.375 [2024-12-10 22:04:39.969837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:32.375 [2024-12-10 22:04:39.969847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:32.375 [2024-12-10 22:04:39.969856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:32.375 [2024-12-10 22:04:39.969865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:32.375 [2024-12-10 22:04:39.969875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:32.375 [2024-12-10 22:04:39.969885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:32.375 [2024-12-10 22:04:39.969894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:32.375 [2024-12-10 22:04:39.969903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:32.375 [2024-12-10 22:04:39.969913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:32.375 [2024-12-10 22:04:39.969923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:32.375 [2024-12-10 22:04:39.969932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:32.375 [2024-12-10 22:04:39.969942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:32.375 [2024-12-10 22:04:39.969960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:32.375 [2024-12-10 22:04:39.969969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:32.375 [2024-12-10 22:04:39.969989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:32.375 [2024-12-10 22:04:39.969998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.970007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:32.375 [2024-12-10 22:04:39.970016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:32.375 [2024-12-10 22:04:39.970025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.970034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:32.375 [2024-12-10 22:04:39.970046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:32.375 [2024-12-10 22:04:39.970068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:32.375 [2024-12-10 22:04:39.970078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:32.375 [2024-12-10 22:04:39.970089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:32.375 [2024-12-10 22:04:39.970099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:32.375 [2024-12-10 22:04:39.970109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:32.375 [2024-12-10 22:04:39.970120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:32.375 [2024-12-10 22:04:39.970130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:32.375 [2024-12-10 22:04:39.970140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:32.375 [2024-12-10 22:04:39.970151] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:32.375 [2024-12-10 22:04:39.970164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:32.375 [2024-12-10 22:04:39.970177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:32.375 [2024-12-10 22:04:39.970188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:32.375 [2024-12-10 22:04:39.970199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:32.375 [2024-12-10 22:04:39.970211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:32.375 [2024-12-10 22:04:39.970222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:32.375 [2024-12-10 22:04:39.970238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:32.375 [2024-12-10 22:04:39.970249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:32.376 [2024-12-10 22:04:39.970259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:32.376 [2024-12-10 22:04:39.970270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:32.376 [2024-12-10 22:04:39.970281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:32.376 [2024-12-10 22:04:39.970291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:32.376 [2024-12-10 22:04:39.970301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:32.376 [2024-12-10 22:04:39.970311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:32.376 [2024-12-10 22:04:39.970321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:32.376 [2024-12-10 22:04:39.970331] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:32.376 [2024-12-10 22:04:39.970342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:32.376 [2024-12-10 22:04:39.970358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:32.376 [2024-12-10 22:04:39.970368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:32.376 [2024-12-10 22:04:39.970378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:32.376 [2024-12-10 22:04:39.970389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:32.376 [2024-12-10 22:04:39.970410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:39.970421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:32.376 [2024-12-10 22:04:39.970430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.860 ms 00:33:32.376 [2024-12-10 22:04:39.970440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.376 [2024-12-10 22:04:40.007626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:40.007665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:32.376 [2024-12-10 22:04:40.007680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.172 ms 00:33:32.376 [2024-12-10 22:04:40.007692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.376 [2024-12-10 22:04:40.007731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:40.007742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:32.376 [2024-12-10 22:04:40.007754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:32.376 [2024-12-10 22:04:40.007764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.376 [2024-12-10 22:04:40.054499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:40.054691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:32.376 [2024-12-10 22:04:40.054712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.751 ms 00:33:32.376 [2024-12-10 22:04:40.054723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.376 [2024-12-10 22:04:40.054761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:40.054773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:32.376 [2024-12-10 22:04:40.054785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:32.376 [2024-12-10 22:04:40.054802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.376 [2024-12-10 22:04:40.054934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:40.054948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:32.376 [2024-12-10 22:04:40.054960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:33:32.376 [2024-12-10 22:04:40.054970] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:32.376 [2024-12-10 22:04:40.055012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:40.055024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:32.376 [2024-12-10 22:04:40.055034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:32.376 [2024-12-10 22:04:40.055044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.376 [2024-12-10 22:04:40.076250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:40.076285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:32.376 [2024-12-10 22:04:40.076298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.191 ms 00:33:32.376 [2024-12-10 22:04:40.076312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.376 [2024-12-10 22:04:40.076425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.376 [2024-12-10 22:04:40.076440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:32.376 [2024-12-10 22:04:40.076451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:32.376 [2024-12-10 22:04:40.076462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.635 [2024-12-10 22:04:40.109668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.635 [2024-12-10 22:04:40.109706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:32.635 [2024-12-10 22:04:40.109720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.234 ms 00:33:32.635 [2024-12-10 22:04:40.109746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.635 [2024-12-10 22:04:40.123944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.635 [2024-12-10 22:04:40.124096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:32.635 [2024-12-10 22:04:40.124145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:33:32.635 [2024-12-10 22:04:40.124156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.635 [2024-12-10 22:04:40.206459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.635 [2024-12-10 22:04:40.206536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:32.635 [2024-12-10 22:04:40.206553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.370 ms 00:33:32.635 [2024-12-10 22:04:40.206564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.635 [2024-12-10 22:04:40.206745] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:32.635 [2024-12-10 22:04:40.206873] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:32.635 [2024-12-10 22:04:40.206992] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:32.635 [2024-12-10 22:04:40.207132] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:32.635 [2024-12-10 22:04:40.207147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.635 [2024-12-10 22:04:40.207158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:32.635 [2024-12-10 
22:04:40.207169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:33:32.635 [2024-12-10 22:04:40.207181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.635 [2024-12-10 22:04:40.207286] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:32.635 [2024-12-10 22:04:40.207301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.635 [2024-12-10 22:04:40.207316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:32.636 [2024-12-10 22:04:40.207328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:32.636 [2024-12-10 22:04:40.207339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.636 [2024-12-10 22:04:40.228745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.636 [2024-12-10 22:04:40.228918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:32.636 [2024-12-10 22:04:40.228941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.414 ms 00:33:32.636 [2024-12-10 22:04:40.228953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.636 [2024-12-10 22:04:40.241958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.636 [2024-12-10 22:04:40.241993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:32.636 [2024-12-10 22:04:40.242006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:32.636 [2024-12-10 22:04:40.242017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:32.636 [2024-12-10 22:04:40.242127] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:32.636 [2024-12-10 22:04:40.242336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:32.636 [2024-12-10 22:04:40.242347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:32.636 [2024-12-10 22:04:40.242358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:33:32.636 [2024-12-10 22:04:40.242368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.203 [2024-12-10 22:04:40.836416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.203 [2024-12-10 22:04:40.836483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:33.203 [2024-12-10 22:04:40.836502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 593.875 ms 00:33:33.204 [2024-12-10 22:04:40.836513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.204 [2024-12-10 22:04:40.842218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.204 [2024-12-10 22:04:40.842374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:33.204 [2024-12-10 22:04:40.842396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.121 ms 00:33:33.204 [2024-12-10 22:04:40.842408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.204 [2024-12-10 22:04:40.842963] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:33.204 [2024-12-10 22:04:40.842992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.204 [2024-12-10 22:04:40.843004] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:33.204 [2024-12-10 22:04:40.843017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:33:33.204 [2024-12-10 22:04:40.843028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.204 [2024-12-10 22:04:40.843070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.204 [2024-12-10 22:04:40.843083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:33.204 [2024-12-10 22:04:40.843095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:33.204 [2024-12-10 22:04:40.843111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.204 [2024-12-10 22:04:40.843147] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 601.997 ms, result 0 00:33:33.204 [2024-12-10 22:04:40.843191] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:33.204 [2024-12-10 22:04:40.843270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.204 [2024-12-10 22:04:40.843281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:33.204 [2024-12-10 22:04:40.843292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:33:33.204 [2024-12-10 22:04:40.843301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.772 [2024-12-10 22:04:41.425459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.772 [2024-12-10 22:04:41.425740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:33.772 [2024-12-10 22:04:41.425788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 581.922 ms 00:33:33.772 [2024-12-10 22:04:41.425799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.772 [2024-12-10 22:04:41.431610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.772 [2024-12-10 22:04:41.431656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:33.772 [2024-12-10 22:04:41.431670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.297 ms 00:33:33.772 [2024-12-10 22:04:41.431681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.772 [2024-12-10 22:04:41.432097] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:33.772 [2024-12-10 22:04:41.432124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.772 [2024-12-10 22:04:41.432135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:33.772 [2024-12-10 22:04:41.432147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.411 ms 00:33:33.772 [2024-12-10 22:04:41.432158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.772 [2024-12-10 22:04:41.432191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.772 [2024-12-10 22:04:41.432205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:33.773 [2024-12-10 22:04:41.432216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:33.773 [2024-12-10 22:04:41.432226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 
22:04:41.432266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 590.028 ms, result 0 00:33:33.773 [2024-12-10 22:04:41.432310] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:33.773 [2024-12-10 22:04:41.432324] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:33.773 [2024-12-10 22:04:41.432337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.432348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:33.773 [2024-12-10 22:04:41.432360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1192.167 ms 00:33:33.773 [2024-12-10 22:04:41.432370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.432403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.432421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:33.773 [2024-12-10 22:04:41.432433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:33.773 [2024-12-10 22:04:41.432443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.443553] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:33.773 [2024-12-10 22:04:41.443819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.443867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:33.773 [2024-12-10 22:04:41.443952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.375 ms 00:33:33.773 [2024-12-10 22:04:41.443988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.444615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.444740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:33.773 [2024-12-10 22:04:41.444832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:33:33.773 [2024-12-10 22:04:41.444867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.446949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.447089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:33.773 [2024-12-10 22:04:41.447174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.038 ms 00:33:33.773 [2024-12-10 22:04:41.447213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.447282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.447445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:33.773 [2024-12-10 22:04:41.447483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:33.773 [2024-12-10 22:04:41.447519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.447649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.447805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:33.773 
[2024-12-10 22:04:41.447843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:33.773 [2024-12-10 22:04:41.447873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.447919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.447999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:33.773 [2024-12-10 22:04:41.448035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:33.773 [2024-12-10 22:04:41.448077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.448148] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:33.773 [2024-12-10 22:04:41.448219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.448252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:33.773 [2024-12-10 22:04:41.448283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:33:33.773 [2024-12-10 22:04:41.448313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.448407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.773 [2024-12-10 22:04:41.448492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:33.773 [2024-12-10 22:04:41.448528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:33.773 [2024-12-10 22:04:41.448557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.773 [2024-12-10 22:04:41.449587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1527.074 ms, result 0 00:33:33.773 [2024-12-10 22:04:41.464363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.773 [2024-12-10 22:04:41.480364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:33.773 [2024-12-10 22:04:41.490582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.033 Validate MD5 checksum, iteration 1 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:34.033 22:04:41 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:34.033 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:34.033 [2024-12-10 22:04:41.633342] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization... 00:33:34.033 [2024-12-10 22:04:41.633660] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86091 ] 00:33:34.292 [2024-12-10 22:04:41.812419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.292 [2024-12-10 22:04:41.934376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.198  [2024-12-10T22:04:44.498Z] Copying: 645/1024 [MB] (645 MBps) [2024-12-10T22:04:47.787Z] Copying: 1024/1024 [MB] (average 641 MBps) 00:33:40.056 00:33:40.056 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:40.056 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:41.433 Validate MD5 checksum, iteration 2 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ba39025bb6dbbc9b981b9e0f8bb83a4b 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ba39025bb6dbbc9b981b9e0f8bb83a4b != \b\a\3\9\0\2\5\b\b\6\d\b\b\c\9\b\9\8\1\b\9\e\0\f\8\b\b\8\3\a\4\b ]] 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:41.433 22:04:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:41.433 [2024-12-10 22:04:49.147082] Starting SPDK v25.01-pre git sha1 
2104eacf0 / DPDK 24.03.0 initialization... 00:33:41.433 [2024-12-10 22:04:49.147397] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86170 ] 00:33:41.693 [2024-12-10 22:04:49.328605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.952 [2024-12-10 22:04:49.450940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.856  [2024-12-10T22:04:51.846Z] Copying: 636/1024 [MB] (636 MBps) [2024-12-10T22:04:53.225Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:33:45.494 00:33:45.494 22:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:45.494 22:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2dfc4b9fdd9e5d2ec5f50975a0983330 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2dfc4b9fdd9e5d2ec5f50975a0983330 != \2\d\f\c\4\b\9\f\d\d\9\e\5\d\2\e\c\5\f\5\0\9\7\5\a\0\9\8\3\3\3\0 ]] 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:47.399 22:04:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86055 ]] 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86055 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 86055 ']' 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 86055 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86055 00:33:47.399 killing process with pid 86055 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86055' 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 86055 00:33:47.399 22:04:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 86055 00:33:48.777 [2024-12-10 22:04:56.148369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:48.777 [2024-12-10 22:04:56.168506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.168547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:48.777 [2024-12-10 22:04:56.168563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:48.777 [2024-12-10 22:04:56.168574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.168597] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:48.777 [2024-12-10 22:04:56.172529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.172561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:48.777 [2024-12-10 22:04:56.172579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.921 ms 00:33:48.777 [2024-12-10 22:04:56.172590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.172795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.172808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:48.777 [2024-12-10 22:04:56.172819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.175 ms 00:33:48.777 [2024-12-10 22:04:56.172829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.174100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.174137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:48.777 [2024-12-10 22:04:56.174150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.255 ms 00:33:48.777 [2024-12-10 22:04:56.174166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.175108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.175310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:48.777 [2024-12-10 22:04:56.175331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.907 ms 00:33:48.777 [2024-12-10 22:04:56.175342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.189157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.189325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:48.777 [2024-12-10 22:04:56.189348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.791 ms 00:33:48.777 [2024-12-10 22:04:56.189365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.197095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.197130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:48.777 [2024-12-10 22:04:56.197144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.678 ms 00:33:48.777 [2024-12-10 22:04:56.197154] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.197241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.197253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:48.777 [2024-12-10 22:04:56.197264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:33:48.777 [2024-12-10 22:04:56.197279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.212140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.212172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:48.777 [2024-12-10 22:04:56.212184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.866 ms 00:33:48.777 [2024-12-10 22:04:56.212194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.226695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.226727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:48.777 [2024-12-10 22:04:56.226739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.489 ms 00:33:48.777 [2024-12-10 22:04:56.226749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.240603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.240734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:48.777 [2024-12-10 22:04:56.240770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.842 ms 00:33:48.777 [2024-12-10 22:04:56.240781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.254539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.254683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:48.777 [2024-12-10 22:04:56.254702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.662 ms 00:33:48.777 [2024-12-10 22:04:56.254712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.777 [2024-12-10 22:04:56.254787] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:48.777 [2024-12-10 22:04:56.254805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:48.777 [2024-12-10 22:04:56.254817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:48.777 [2024-12-10 22:04:56.254828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:48.777 [2024-12-10 22:04:56.254840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 
[2024-12-10 22:04:56.254895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.254990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:48.777 [2024-12-10 22:04:56.255002] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:48.777 [2024-12-10 22:04:56.255013] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5c1ae8f0-3b39-403a-9320-2fdfd418b7f8 00:33:48.777 [2024-12-10 22:04:56.255023] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:48.777 [2024-12-10 22:04:56.255033] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:48.777 [2024-12-10 22:04:56.255043] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:48.777 [2024-12-10 22:04:56.255069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:48.777 [2024-12-10 22:04:56.255079] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:48.777 [2024-12-10 22:04:56.255090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:48.777 [2024-12-10 22:04:56.255105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:48.777 [2024-12-10 22:04:56.255114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:48.777 [2024-12-10 22:04:56.255123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:48.777 [2024-12-10 22:04:56.255134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.777 [2024-12-10 22:04:56.255146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:48.777 [2024-12-10 22:04:56.255160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:33:48.778 [2024-12-10 22:04:56.255172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.778 [2024-12-10 22:04:56.273556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.778 [2024-12-10 22:04:56.273689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:48.778 [2024-12-10 22:04:56.273708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.383 ms 00:33:48.778 [2024-12-10 22:04:56.273718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
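The orderly teardown traced here (persist L2P, NV-cache, band and trim metadata, write the superblock, set the clean state) runs because killprocess sent SIGTERM and then waited, unlike the earlier kill -9. A reconstruction of killprocess from the autotest_common.sh@954-978 xtrace above; simplified, with the retry logic of the real helper omitted and the return-code handling assumed:

  # killprocess as traced above: check the pid, refuse to signal a sudo
  # wrapper, SIGTERM, then reap so 'FTL shutdown' can run to completion.
  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1       # process still alive?
    local process_name
    if [[ $(uname) == Linux ]]; then
      process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name == sudo ]] && return 1      # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # let the teardown finish
  }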
00:33:48.778 [2024-12-10 22:04:56.274293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:48.778 [2024-12-10 22:04:56.274307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:33:48.778 [2024-12-10 22:04:56.274319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.546 ms
00:33:48.778 [2024-12-10 22:04:56.274329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:48.778 [2024-12-10 22:04:56.335029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:48.778 [2024-12-10 22:04:56.335074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:33:48.778 [2024-12-10 22:04:56.335087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:48.778 [2024-12-10 22:04:56.335101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:48.778 [2024-12-10 22:04:56.335132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:48.778 [2024-12-10 22:04:56.335143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:33:48.778 [2024-12-10 22:04:56.335154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:48.778 [2024-12-10 22:04:56.335164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:48.778 [2024-12-10 22:04:56.335234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:48.778 [2024-12-10 22:04:56.335247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:33:48.778 [2024-12-10 22:04:56.335258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:48.778 [2024-12-10 22:04:56.335268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:48.778 [2024-12-10 22:04:56.335290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:48.778 [2024-12-10 22:04:56.335300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:33:48.778 [2024-12-10 22:04:56.335310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:48.778 [2024-12-10 22:04:56.335320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:48.778 [2024-12-10 22:04:56.455003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:48.778 [2024-12-10 22:04:56.455216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:33:48.778 [2024-12-10 22:04:56.455241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:48.778 [2024-12-10 22:04:56.455252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:49.057 [2024-12-10 22:04:56.549681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:49.057 [2024-12-10 22:04:56.549725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:33:49.057 [2024-12-10 22:04:56.549739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:49.057 [2024-12-10 22:04:56.549749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:49.057 [2024-12-10 22:04:56.549860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:49.057 [2024-12-10 22:04:56.549872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:33:49.057 [2024-12-10 22:04:56.549883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:49.057 [2024-12-10 22:04:56.549894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:49.057 [2024-12-10 22:04:56.549939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:49.057 [2024-12-10 22:04:56.549967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:33:49.057 [2024-12-10 22:04:56.549978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:49.057 [2024-12-10 22:04:56.549988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:49.057 [2024-12-10 22:04:56.550144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:49.057 [2024-12-10 22:04:56.550159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:33:49.057 [2024-12-10 22:04:56.550191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:49.057 [2024-12-10 22:04:56.550202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:49.057 [2024-12-10 22:04:56.550242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:49.057 [2024-12-10 22:04:56.550254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:33:49.057 [2024-12-10 22:04:56.550270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:49.057 [2024-12-10 22:04:56.550280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:49.057 [2024-12-10 22:04:56.550321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:49.057 [2024-12-10 22:04:56.550332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:33:49.057 [2024-12-10 22:04:56.550343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:49.057 [2024-12-10 22:04:56.550352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:49.057 [2024-12-10 22:04:56.550397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:33:49.057 [2024-12-10 22:04:56.550412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:33:49.057 [2024-12-10 22:04:56.550423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:33:49.057 [2024-12-10 22:04:56.550432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:49.057 [2024-12-10 22:04:56.550588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 382.662 ms, result 0
00:33:50.448 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:33:50.448 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:33:50.448 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:33:50.448 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:33:50.448 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:33:50.448 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:33:50.449 Remove shared memory files
22:04:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:33:50.449 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:50.449 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:33:50.449 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:33:50.449 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85832
00:33:50.449 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:50.449 22:04:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:33:50.449 ************************************
00:33:50.449 END TEST ftl_upgrade_shutdown
00:33:50.449 ************************************
00:33:50.449
00:33:50.449 real 1m30.743s
00:33:50.449 user 2m0.986s
00:33:50.449 sys 0m25.489s
00:33:50.449 22:04:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:50.449 22:04:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:33:50.449 22:04:57 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:33:50.449 22:04:57 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:33:50.449 22:04:57 ftl -- ftl/ftl.sh@14 -- # killprocess 78120
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@954 -- # '[' -z 78120 ']'
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@958 -- # kill -0 78120
00:33:50.449 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78120) - No such process
00:33:50.449 Process with pid 78120 is not found
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 78120 is not found'
00:33:50.449 22:04:57 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:33:50.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
22:04:57 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86298
00:33:50.449 22:04:57 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:50.449 22:04:57 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86298
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@835 -- # '[' -z 86298 ']'
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:50.449 22:04:57 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:50.449 [2024-12-10 22:04:58.006172] Starting SPDK v25.01-pre git sha1 2104eacf0 / DPDK 24.03.0 initialization...
00:33:50.449 [2024-12-10 22:04:58.007088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86298 ]
00:33:50.708 [2024-12-10 22:04:58.205876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:50.708 [2024-12-10 22:04:58.317337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:51.645 22:04:59 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:51.645 22:04:59 ftl -- common/autotest_common.sh@868 -- # return 0
00:33:51.645 22:04:59 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:51.904 nvme0n1
00:33:51.904 22:04:59 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:51.904 22:04:59 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:51.904 22:04:59 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:52.164 22:04:59 ftl -- ftl/common.sh@28 -- # stores=ab9b8b60-a45a-4060-9856-acd3cdfa2061
00:33:52.164 22:04:59 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:52.164 22:04:59 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab9b8b60-a45a-4060-9856-acd3cdfa2061
00:33:52.164 22:04:59 ftl -- ftl/ftl.sh@23 -- # killprocess 86298
00:33:52.164 22:04:59 ftl -- common/autotest_common.sh@954 -- # '[' -z 86298 ']'
00:33:52.164 22:04:59 ftl -- common/autotest_common.sh@958 -- # kill -0 86298
00:33:52.164 22:04:59 ftl -- common/autotest_common.sh@959 -- # uname
00:33:52.164 22:04:59 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:52.164 22:04:59 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86298
00:33:52.423 killing process with pid 86298
22:04:59 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:52.423 22:04:59 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:52.423 22:04:59 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86298'
00:33:52.423 22:04:59 ftl -- common/autotest_common.sh@973 -- # kill 86298
00:33:52.423 22:04:59 ftl -- common/autotest_common.sh@978 -- # wait 86298
00:33:54.956 22:05:02 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:54.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:54.957 Waiting for block devices as requested
00:33:55.216 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:55.216 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:55.216 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:55.475 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:00.747 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:00.747 Remove shared memory files
22:05:08 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:34:00.747 22:05:08 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:00.747 22:05:08 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:00.747 22:05:08 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:00.747 22:05:08 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:00.747 22:05:08 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:00.747 22:05:08 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:34:00.747 ************************************
00:34:00.747 END TEST ftl
00:34:00.747 ************************************
00:34:00.747
00:34:00.747 real 12m9.217s
00:34:00.747 user 14m49.142s
00:34:00.747 sys 1m39.187s
00:34:00.747 22:05:08 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:00.747 22:05:08 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:00.747 22:05:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:00.747 22:05:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:00.747 22:05:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:00.747 22:05:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:00.747 22:05:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:00.747 22:05:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:00.747 22:05:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:00.747 22:05:08 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:00.747 22:05:08 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:00.747 22:05:08 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:00.747 22:05:08 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:00.747 22:05:08 -- common/autotest_common.sh@10 -- # set +x
00:34:00.747 22:05:08 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:00.747 22:05:08 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:00.747 22:05:08 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:00.747 22:05:08 -- common/autotest_common.sh@10 -- # set +x
00:34:03.283 INFO: APP EXITING
00:34:03.283 INFO: killing all VMs
00:34:03.283 INFO: killing vhost app
00:34:03.283 INFO: EXIT DONE
00:34:03.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:04.111 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:04.111 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:04.111 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:04.111 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:04.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:05.248 Cleaning
00:34:05.248 Removing: /var/run/dpdk/spdk0/config
00:34:05.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:05.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:05.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:05.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:05.248 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:05.248 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:05.248 Removing: /var/run/dpdk/spdk0
00:34:05.248 Removing: /var/run/dpdk/spdk_pid58837
00:34:05.248 Removing: /var/run/dpdk/spdk_pid59089
00:34:05.248 Removing: /var/run/dpdk/spdk_pid59329
00:34:05.248 Removing: /var/run/dpdk/spdk_pid59433
00:34:05.248 Removing: /var/run/dpdk/spdk_pid59489
00:34:05.248 Removing: /var/run/dpdk/spdk_pid59628
00:34:05.248 Removing: /var/run/dpdk/spdk_pid59652
00:34:05.248 Removing: /var/run/dpdk/spdk_pid59866
00:34:05.248 Removing: /var/run/dpdk/spdk_pid59985
00:34:05.248 Removing: /var/run/dpdk/spdk_pid60098
00:34:05.248 Removing: /var/run/dpdk/spdk_pid60231
00:34:05.248 Removing: /var/run/dpdk/spdk_pid60339
00:34:05.248 Removing: /var/run/dpdk/spdk_pid60384
00:34:05.248 Removing: /var/run/dpdk/spdk_pid60415
00:34:05.248 Removing: /var/run/dpdk/spdk_pid60491
00:34:05.248 Removing: /var/run/dpdk/spdk_pid60608
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61067
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61146
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61228
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61244
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61403
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61419
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61578
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61600
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61675
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61693
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61762
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61786
00:34:05.248 Removing: /var/run/dpdk/spdk_pid61981
00:34:05.248 Removing: /var/run/dpdk/spdk_pid62023
00:34:05.248 Removing: /var/run/dpdk/spdk_pid62112
00:34:05.248 Removing: /var/run/dpdk/spdk_pid62301
00:34:05.248 Removing: /var/run/dpdk/spdk_pid62400
00:34:05.248 Removing: /var/run/dpdk/spdk_pid62443
00:34:05.248 Removing: /var/run/dpdk/spdk_pid62905
00:34:05.248 Removing: /var/run/dpdk/spdk_pid63003
00:34:05.248 Removing: /var/run/dpdk/spdk_pid63118
00:34:05.248 Removing: /var/run/dpdk/spdk_pid63171
00:34:05.248 Removing: /var/run/dpdk/spdk_pid63202
00:34:05.248 Removing: /var/run/dpdk/spdk_pid63287
00:34:05.248 Removing: /var/run/dpdk/spdk_pid63929
00:34:05.248 Removing: /var/run/dpdk/spdk_pid63976
00:34:05.248 Removing: /var/run/dpdk/spdk_pid64469
00:34:05.248 Removing: /var/run/dpdk/spdk_pid64572
00:34:05.248 Removing: /var/run/dpdk/spdk_pid64688
00:34:05.248 Removing: /var/run/dpdk/spdk_pid64747
00:34:05.248 Removing: /var/run/dpdk/spdk_pid64772
00:34:05.248 Removing: /var/run/dpdk/spdk_pid64803
00:34:05.248 Removing: /var/run/dpdk/spdk_pid66699
00:34:05.248 Removing: /var/run/dpdk/spdk_pid66847
00:34:05.248 Removing: /var/run/dpdk/spdk_pid66857
00:34:05.248 Removing: /var/run/dpdk/spdk_pid66869
00:34:05.248 Removing: /var/run/dpdk/spdk_pid66917
00:34:05.507 Removing: /var/run/dpdk/spdk_pid66921
00:34:05.507 Removing: /var/run/dpdk/spdk_pid66933
00:34:05.507 Removing: /var/run/dpdk/spdk_pid66983
00:34:05.507 Removing: /var/run/dpdk/spdk_pid66987
00:34:05.507 Removing: /var/run/dpdk/spdk_pid66999
00:34:05.507 Removing: /var/run/dpdk/spdk_pid67047
00:34:05.507 Removing: /var/run/dpdk/spdk_pid67051
00:34:05.508 Removing: /var/run/dpdk/spdk_pid67063
00:34:05.508 Removing: /var/run/dpdk/spdk_pid68490
00:34:05.508 Removing: /var/run/dpdk/spdk_pid68609
00:34:05.508 Removing: /var/run/dpdk/spdk_pid70048
00:34:05.508 Removing: /var/run/dpdk/spdk_pid71789
00:34:05.508 Removing: /var/run/dpdk/spdk_pid71873
00:34:05.508 Removing: /var/run/dpdk/spdk_pid71955
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72065
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72162
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72263
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72343
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72424
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72539
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72631
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72732
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72817
00:34:05.508 Removing: /var/run/dpdk/spdk_pid72898
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73008
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73105
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73201
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73286
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73367
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73481
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73579
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73676
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73761
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73843
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73923
00:34:05.508 Removing: /var/run/dpdk/spdk_pid73999
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74112
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74204
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74310
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74384
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74465
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74546
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74620
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74730
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74826
00:34:05.508 Removing: /var/run/dpdk/spdk_pid74980
00:34:05.508 Removing: /var/run/dpdk/spdk_pid75271
00:34:05.508 Removing: /var/run/dpdk/spdk_pid75316
00:34:05.508 Removing: /var/run/dpdk/spdk_pid75774
00:34:05.508 Removing: /var/run/dpdk/spdk_pid75962
00:34:05.508 Removing: /var/run/dpdk/spdk_pid76067
00:34:05.508 Removing: /var/run/dpdk/spdk_pid76179
00:34:05.767 Removing: /var/run/dpdk/spdk_pid76238
00:34:05.767 Removing: /var/run/dpdk/spdk_pid76264
00:34:05.767 Removing: /var/run/dpdk/spdk_pid76570
00:34:05.767 Removing: /var/run/dpdk/spdk_pid76640
00:34:05.767 Removing: /var/run/dpdk/spdk_pid76727
00:34:05.767 Removing: /var/run/dpdk/spdk_pid77165
00:34:05.767 Removing: /var/run/dpdk/spdk_pid77313
00:34:05.767 Removing: /var/run/dpdk/spdk_pid78120
00:34:05.767 Removing: /var/run/dpdk/spdk_pid78263
00:34:05.767 Removing: /var/run/dpdk/spdk_pid78461
00:34:05.767 Removing: /var/run/dpdk/spdk_pid78579
00:34:05.767 Removing: /var/run/dpdk/spdk_pid78925
00:34:05.767 Removing: /var/run/dpdk/spdk_pid79213
00:34:05.767 Removing: /var/run/dpdk/spdk_pid79579
00:34:05.767 Removing: /var/run/dpdk/spdk_pid79785
00:34:05.767 Removing: /var/run/dpdk/spdk_pid79936
00:34:05.767 Removing: /var/run/dpdk/spdk_pid79995
00:34:05.767 Removing: /var/run/dpdk/spdk_pid80144
00:34:05.767 Removing: /var/run/dpdk/spdk_pid80182
00:34:05.767 Removing: /var/run/dpdk/spdk_pid80246
00:34:05.767 Removing: /var/run/dpdk/spdk_pid80469
00:34:05.767 Removing: /var/run/dpdk/spdk_pid80705
00:34:05.767 Removing: /var/run/dpdk/spdk_pid81195
00:34:05.767 Removing: /var/run/dpdk/spdk_pid81670
00:34:05.767 Removing: /var/run/dpdk/spdk_pid82174
00:34:05.767 Removing: /var/run/dpdk/spdk_pid82732
00:34:05.767 Removing: /var/run/dpdk/spdk_pid82880
00:34:05.767 Removing: /var/run/dpdk/spdk_pid82972
00:34:05.767 Removing: /var/run/dpdk/spdk_pid83686
00:34:05.767 Removing: /var/run/dpdk/spdk_pid83761
00:34:05.767 Removing: /var/run/dpdk/spdk_pid84276
00:34:05.767 Removing: /var/run/dpdk/spdk_pid84691
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85229
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85368
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85425
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85503
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85560
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85629
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85832
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85918
00:34:05.767 Removing: /var/run/dpdk/spdk_pid85989
00:34:05.767 Removing: /var/run/dpdk/spdk_pid86055
00:34:05.767 Removing: /var/run/dpdk/spdk_pid86091
00:34:05.767 Removing: /var/run/dpdk/spdk_pid86170
00:34:05.767 Removing: /var/run/dpdk/spdk_pid86298
00:34:06.026 Clean
00:34:06.026 22:05:13 -- common/autotest_common.sh@1453 -- # return 0
00:34:06.026 22:05:13 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:06.026 22:05:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:06.026 22:05:13 -- common/autotest_common.sh@10 -- # set +x
00:34:06.026 22:05:13 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:06.026 22:05:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:06.026 22:05:13 -- common/autotest_common.sh@10 -- # set +x
00:34:06.026 22:05:13 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:06.026 22:05:13 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:06.026 22:05:13 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:06.026 22:05:13 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:06.026 22:05:13 -- spdk/autotest.sh@398 -- # hostname
00:34:06.026 22:05:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:06.285 geninfo: WARNING: invalid characters removed from testname!
00:34:32.837 22:05:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:33.774 22:05:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:36.335 22:05:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:38.238 22:05:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:40.147 22:05:47 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:42.738 22:05:49 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:44.642 22:05:51 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:44.642 22:05:51 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:44.642 22:05:51 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:44.642 22:05:51 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:44.642 22:05:51 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:44.642 22:05:51 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:44.652 + [[ -n 5249 ]]
00:34:44.652 + sudo kill 5249
00:34:44.661 [Pipeline] }
00:34:44.668 [Pipeline] // timeout
00:34:44.673 [Pipeline] }
00:34:44.687 [Pipeline] // stage
00:34:44.692 [Pipeline] }
00:34:44.706 [Pipeline] // catchError
00:34:44.717 [Pipeline] stage
00:34:44.719 [Pipeline] { (Stop VM)
00:34:44.731 [Pipeline] sh
00:34:45.014 + vagrant halt
00:34:48.305 ==> default: Halting domain...
00:34:54.894 [Pipeline] sh
00:34:55.180 + vagrant destroy -f
00:34:57.717 ==> default: Removing domain...
00:34:58.298 [Pipeline] sh
00:34:58.582 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:58.591 [Pipeline] }
00:34:58.606 [Pipeline] // stage
00:34:58.612 [Pipeline] }
00:34:58.626 [Pipeline] // dir
00:34:58.632 [Pipeline] }
00:34:58.646 [Pipeline] // wrap
00:34:58.652 [Pipeline] }
00:34:58.665 [Pipeline] // catchError
00:34:58.675 [Pipeline] stage
00:34:58.677 [Pipeline] { (Epilogue)
00:34:58.690 [Pipeline] sh
00:34:58.975 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:04.264 [Pipeline] catchError
00:35:04.265 [Pipeline] {
00:35:04.278 [Pipeline] sh
00:35:04.564 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:04.823 Artifacts sizes are good
00:35:04.832 [Pipeline] }
00:35:04.846 [Pipeline] // catchError
00:35:04.857 [Pipeline] archiveArtifacts
00:35:04.864 Archiving artifacts
00:35:04.989 [Pipeline] cleanWs
00:35:05.003 [WS-CLEANUP] Deleting project workspace...
00:35:05.003 [WS-CLEANUP] Deferred wipeout is used...
00:35:05.023 [WS-CLEANUP] done
00:35:05.029 [Pipeline] }
00:35:05.043 [Pipeline] // stage
00:35:05.048 [Pipeline] }
00:35:05.060 [Pipeline] // node
00:35:05.066 [Pipeline] End of Pipeline
00:35:05.108 Finished: SUCCESS